Rationale for aversion to the central database?
I am a Rails developer at a medium-large size company. I’ve mostly worked at smaller companies. I’ve some exposure to other web development communities.
When it comes to databases, I have universally encountered the attitude that one should treat the database as a dumb data bucket. There is a *very* strong aversion to putting much of any business logic in the database. I encounter substantial aversion to having multiple applications access one database, or even the reverse: all abstraction should be at the application layer.
My best theory is that these communities developed at a time when Windows was more dominant, and just generally it was *significantly* easier to use MySQL than Postgres for many, particularly new, developers. And it is pretty reasonable to adopt an aversion to sophisticated use of the database in that case.
This attitude has just continued to today, even as many of them have switched to Postgres.
This is only a hypothesis. I am now officially researching the issue. I would be grateful for any wisdom from this community.
Aside: it is rare to find a situation in life or anywhere where one widely adopted thing is worse in *every way* than another thing, but this certainly was and largely still continues to be the case when one compares MySQL and Postgres. So why do folks continue to use MySQL? I find this mystifying.
Let's see.... There are two major issues when writing business logic in database:
1. Portability. Being tied to a single database engine is not always a good idea. When you write business logic in the database, you have to write and maintain your stored procedures for every database engine you want to support. That can be really complicated, and will certainly take a great deal of time, as the programming languages for different databases are very different from each other. And it's permanent: every time you make a change to a stored procedure, you must make that change for every supported database.
2. Performance. I'm still not certain about this, but, presumably, PL/pgSQL (among others) may not have the same performance as Java, Ruby, Python, C++, or any other programming language. Also, when your application runs outside of the database server, having business logic in the database will make your app use more resources from the database server, slowing things down for all other operations. This goes against separating database and application.
However, there are some cases when you may want or need to put business logic in the database: when you need to fetch large volumes of data to produce some report. This is the case for some accounting reports in complex ERPs. The only advantage stored procedures have is that they run INSIDE the database, so there's no TCP/IP overhead and no network latency when the stored procedure makes a large query. Even running on the same host, fetching large volumes of data will always be faster from a stored procedure.
Setting aside this considerable advantage of stored procedures, I still try to avoid programming business logic in the database. In the very specific cases where I need to take advantage of this, I try to keep it as simple as possible, closer to data collection than business logic, so the application receives processed or summarized data and processes it as needed.
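A minimal sketch of that "closer to data collection" style, with a hypothetical sales table (all names invented for illustration):

```sql
-- Hypothetical reporting function: the heavy aggregation runs inside
-- the database, and the application receives only summarized rows.
CREATE FUNCTION monthly_sales_summary(p_year int)
RETURNS TABLE (month int, total numeric) AS $$
  SELECT EXTRACT(MONTH FROM sold_at)::int, SUM(amount)
  FROM sales
  WHERE EXTRACT(YEAR FROM sold_at) = p_year
  GROUP BY 1
  ORDER BY 1;
$$ LANGUAGE sql STABLE;
```

The application then formats at most twelve rows instead of pulling every sale over the wire.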
Regards,
Alvaro Aguayo
Operations Manager
Open Comb Systems E.I.R.L.
Office: (+51-1) 3377813 | Mobile: (+51) 995540103 | (+51) 954183248
Web: www.ocs.pe
----- Original Message -----
From: "Guyren Howe" <guyren@gmail.com>
To: "PostgreSql-general" <pgsql-general@postgresql.org>
Sent: Sunday, 8 April, 2018 16:39:49
Subject: Rationale for aversion to the central database?
1. With a microservice-based architecture these days, it is difficult to
justify putting all logic in a central database, as you can only scale up in
a database. Business logic in things like Spark can make a claim for a
scale-out solution.
2. All RDBMSs have a non-portable stored procedure language, making migration
a pain.
On 04/08/2018 02:39 PM, Guyren Howe wrote:
In general I see it as a case of people working where they are
comfortable. So folks that understand and like SQL do the heavy lifting
there and use application logic to just work with the output of the
database business logic. Folks that are comfortable with a language
other than SQL use that language to do the business logic and see the
database as just the dumb data store you refer to. The rise of
frameworks over databases has also contributed to this in my opinion.
Mostly because they encourage the notion that there is such a thing as
universal SQL that operates independent of the underlying database.
While that is possible, it usually leads to a very simple SQL model that
can work over multiple database engines. So again you end up with the
database as a data bucket.
--
Adrian Klaver
adrian.klaver@aklaver.com
On April 8, 2018 02:40:46 pm PDT, "Guyren Howe" <guyren@gmail.com> wrote:
Hi there. This issue is close to my heart and I'm with you. I am
however very comfortable with using psql and PL/pgSQL and I am very
opinionated.
I feel *very* strongly that a database that actually matters and where
RI is critical, i.e., any PG db I handle, should make sense on its own
and be *highly* usable on its own. It should not be dependent on some
particular external application code to use it or make sense of things.
It follows that I think nonintuitive exceptions/gotchas should be
*clear* at a db level, likely using functions to encapsulate that
information.
Sure, PL/pgSQL may possibly be slow at some things like doing lots of
bigint math, but I would probably use C and ECPG for the appropriate
cases.
Not a large percentage of programmers these days know how fast db tasks
can be, because they are used to working with relatively slow tools and
frameworks. (Yes, typical Python.)
I am also highly mystified by the dumbstore approach and frankly, I
think that folks should KNOW their tools better. Not knowing how to use
your database effectively typically results in unnecessary and often
very convoluted application code, in my experience.
I keep hearing about db portability, but I have yet to see a case where
db logic was an issue. To be honest, I haven't seen many migrations at
all. Why? Because I think it rarely ever happens. If I had to do one, I
sure as heck hope the db would be "clean" and understandable without
having to review some probably awful application code. Why would anyone
migrate *away* from PG anyway? :)
One advantage to using logic and functions in the db is that you can
fix things immediately without having to make new application builds.
That in itself is a huge advantage, IMO.
Cheers,
-g
I am however very comfortable with using psql and PL/pgSQL and I am
very opinionated.
Nothing wrong with this approach and it may very well work 90% of the
time. Until ... a day comes when
you need to migrate out of PG to another RDBMS. Good luck at that time.
Greetings,
* Alvaro Aguayo Garcia-Rada (aaguayo@opensysperu.com) wrote:
1. Portability. Being tied to a single database engine is not always a good idea. When you write business logic in the database, you have to write and maintain your stored procedures for every database engine you want to support. That can be really complicated, and will certainly take a great deal of time, as the programming languages for different databases are very different from each other. And it's permanent: every time you make a change to a stored procedure, you must make that change for every supported database.
The portability claim tends to be both a false one and often, when
realized, results in a solution where you aren't using the database for
anything complicated and you'd be better off with a much simpler data
store. You also don't actually offer any justification for the claim
that being tied to a single database engine is not always a good idea-
why is that? With commercial databases it tends to be because you are
at the behest of some very large commercial company- but that isn't an
issue with PostgreSQL. The next argument may be that the project might
go away and force a move to another database, but PG's track record is
hard to beat in this area given the history and the number of people
working to make it better and keep it maintained year after year.
Ultimately, you really should be thinking of the database as the
language of your data. You wouldn't write your app in multiple
different programming languages, would you? What if Python suddenly
goes away as a language, or Java does? Would you write every piece of
software twice, so that you can flip over to using Python instead of
Java on a whim?
2. Performance. I'm still not certain about this, but, presumably, PL/pgSQL (among others) may not have the same performance as Java, Ruby, Python, C++, or any other programming language. Also, when your application runs outside of the database server, having business logic in the database will make your app use more resources from the database server, slowing things down for all other operations. This goes against separating database and application.
No, plpgsql doesn't have the same performance characteristics as Java,
Ruby, Python, C++, or many other languages- but that's why it isn't the
only language which is supported in PostgreSQL. You can write back-end
functions in another of those languages, plus quite a few others
including Javascript, R, Perl. As for the question about if it'll
actually result in the database server being more taxed or not- that
really depends. Aggregates are a great example- is it going to be
cheaper for the database to run 'sum()' across a data set and give you
the result, or for it to take every row from that table and ship it to
a client? There are certainly examples which can go the other way too,
of course, but it's really something to think about on an individual
basis, not something to make a broad-stroke decision about. In
particular, the argument that you'll get better performance by moving
the code away from the data tends not to hold.
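To make the aggregate example concrete (table and column names invented):

```sql
-- One row crosses the network:
SELECT sum(amount)
FROM payments
WHERE created_at >= now() - interval '1 day';

-- versus shipping every matching row to the client and summing there:
SELECT amount
FROM payments
WHERE created_at >= now() - interval '1 day';
```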
However, there are some cases when you may want or need to put business logic in the database: when you need to fetch large volumes of data to produce some report. This is the case for some accounting reports in complex ERPs. The only advantage stored procedures have is that they run INSIDE the database, so there's no TCP/IP overhead and no network latency when the stored procedure makes a large query. Even running on the same host, fetching large volumes of data will always be faster from a stored procedure.
This is what I'm getting at above, but I would caution that looping over
a table in a stored procedure is generally much less performant than
finding a way to express what you want in SQL.
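A sketch of that caution about looping (schema invented for illustration):

```sql
-- Row-by-row loop in PL/pgSQL: the function walks every row itself.
CREATE FUNCTION stale_count_slow() RETURNS bigint AS $$
DECLARE
  r record;
  n bigint := 0;
BEGIN
  FOR r IN SELECT * FROM accounts LOOP
    IF r.last_login < now() - interval '1 year' THEN
      n := n + 1;
    END IF;
  END LOOP;
  RETURN n;
END;
$$ LANGUAGE plpgsql;

-- The set-based equivalent, which the planner can optimize:
SELECT count(*) FROM accounts
WHERE last_login < now() - interval '1 year';
```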
I'm afraid that the other advantages of doing more in the database
aren't really being considered in your arguments above either- things
like having constraints all checked in one place, regardless of the
application, and the ability to have interfaces defined which multiple
applications could operate against and know that they're all going to be
getting back the same, consistent, results from the database since it's
the same code underneath. Some of that can be done by sharing code
between the applications, of course, but there's a great deal more risk
there (what about when the applications need to change something that's
done in that shared code, so they essentially fork it..? Or the
applications have to be upgraded at different times, or a variety of
other situations which could lead to that common code diverging, or even
when the applications aren't written in the same language...).
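One hedged illustration of such a shared interface (names invented): every application calls the same function, so the rule lives in exactly one place:

```sql
-- A single entry point for creating users; both the web app and a
-- batch importer would call this, so validation cannot diverge.
CREATE FUNCTION create_user(p_email text)
RETURNS bigint AS $$
  INSERT INTO users (email)
  SELECT lower(p_email)
  WHERE p_email ~ '^[^@]+@[^@]+$'   -- deliberately simplistic check
  RETURNING user_id;
$$ LANGUAGE sql;
```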
Just some food for thought.
Thanks!
Stephen
Greetings,
* Ravi Krishna (sravikrishna3@gmail.com) wrote:
I am however very comfortable with using psql and PL/pgSQL and I am very
opinionated.
Nothing wrong with this approach and it may very well work 90% of the time.
Until ... a day comes when
you need to migrate out of PG to another RDBMS. Good luck at that time.
Oh, don't worry, people do that. What I particularly enjoy are the
stories (of which there are a number now...) where people moved away
from PG for some reason or another, then to another database, to
another, and another, and finally back to PG again, much the wiser for
it but also rather battle-worn. :)
Thanks!
Stephen
On 9 April 2018 at 07:39, Guyren Howe <guyren@gmail.com> wrote:
It is interesting looking at many of the responses to this thread. I see a
lot at each extreme - either put lots of stuff in the database or use the
database as just a 'dumb' store and put everything in the application code.
I think the real solution is somewhere in the middle. I've lost count of
the number of applications where the application code is jumping through
all sorts of hoops to do basic data operations which would be far better
handled in the database and can easily be done using just ANSI SQL (so is
portable). It drives me crazy when people tell me the database is slow when
they are doing 'select * from table' and then filtering and sorting the
data in their application. Applications should take advantage of what the
database does well. Unfortunately, I see far too many developers who are
uncomfortable with SQL and don't know how to structure their queries
efficiently (lots of nested subqueries, Cartesian joins, etc.).
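The "select * then filter in the app" pattern versus letting the database do what it does well, in plain ANSI SQL (table invented):

```sql
-- Instead of SELECT * FROM orders followed by filtering and sorting
-- in application code, push the work into the query:
SELECT order_id, customer_id, total
FROM orders
WHERE status = 'open'
  AND created_at >= DATE '2018-01-01'
ORDER BY created_at DESC;
```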
At the other extreme are those who tend to put almost everything in the
database - including business policy and business 'rules' which are
probably better categorised as current business strategy. First, I think it
is nearly always a mistake to try and enforce business policy with
technology. Policies change too often and should be dealt with via
administrative measures. Technology can certainly be used to raise alerts
regarding policy breaches, but should not be used to enforce policies.
Likewise, some business rules are more akin to strategies than being actual
static rules and can change with little notice, rhyme or reason. These
probably should not be 'hard coded' into the database. Other rules are more
stable and unlikely to ever change and are likely good candidates for being
encoded in the database as either functions or constraints.
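A sketch of that distinction (column names hypothetical): a rule that is effectively static belongs in the schema, while a strategy that changes quarterly does not:

```sql
-- Stable rule: a payment amount can never be negative.
ALTER TABLE payments
  ADD CONSTRAINT payments_amount_nonnegative CHECK (amount >= 0);

-- By contrast, "orders over $500 get free shipping" is strategy, not a
-- rule; it is better left in application code where it can change freely.
```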
I do feel that often the big problem is with management who fail to
understand the time and effort needed to develop a good data model.
Developers are put under pressure to deliver functionality and as long as
it looks correct at the interface level, all is good. Little thought is
really put into long term maintenance or performance. From a developer
perspective, time put into becoming an expert in React, Angular, Node,
Python etc is probably going to earn them more bonus points than time spent
on developing skills in defining good data models or understanding of the
power/functionality of the underlying database engine. Of course, this does
tend to be short sighted as a good data model will tend to make it easier
to add/enhance an application and understanding your database system will
make changes and enhancements less daunting.
For me, the sign of a good developer is one who is able to get the balance
right. They understand the strengths and weaknesses of ALL the components
involved and are able to select the technology mix which suits the problem
domain and are able to get the right balance between business
responsiveness to change and long term maintenance/viability.
Unfortunately, such developers are rare, so it will usually mean there are
a team of people with different skills and what will matter is how well
they are able to work together as a team and come up with an architecture
which satisfies the business requirements.
--
regards,
Tim
--
Tim Cross
Tim, I'm sorry if I sound like a cheerleader, but boy did you nail this. I
would basically say exactly the same thing, just not as well.
On Sun, Apr 8, 2018 at 9:37 PM, Tim Cross <theophilusx@gmail.com> wrote:
On 2018-04-08 14:39:49 -0700, Guyren Howe wrote:
Like Tim I would like to make a case for finding a middle ground. The
sweet spot may be different from project to project and it is certainly
different for different teams or companies.
I'll try to outline what I like to solve within the database and what I
like to put into the application, and why:
Everything which is easy to specify *declaratively* goes into the
database: Tables (of course), views, constraints, etc. These are
basically type declarations for the data which is stored in the
database, so it belongs there.
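In that spirit, the declarative layer might look like this (schema invented for illustration):

```sql
-- Tables, constraints, and views as "type declarations" for the data.
CREATE TABLE customers (
    customer_id bigserial PRIMARY KEY,
    email       text NOT NULL UNIQUE
);

CREATE TABLE invoices (
    invoice_id  bigserial PRIMARY KEY,
    customer_id bigint NOT NULL REFERENCES customers,
    amount      numeric NOT NULL CHECK (amount >= 0),
    issued_on   date NOT NULL
);

CREATE VIEW overdue_invoices AS
    SELECT * FROM invoices
    WHERE issued_on < current_date - 30;
```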
Simple stored procedures and triggers go into the database, too. They
should make using the database easier and should not surprise the user.
This is mostly optional.
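A typical example of a trigger that helps without surprising anyone, on a hypothetical documents table assumed to have an updated_at column:

```sql
-- Keep an updated_at column current on every UPDATE.
CREATE FUNCTION touch_updated_at() RETURNS trigger AS $$
BEGIN
  NEW.updated_at := now();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER documents_touch
  BEFORE UPDATE ON documents
  FOR EACH ROW EXECUTE PROCEDURE touch_updated_at();
```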
Anything I would consider "application logic" (when the user does X,
then the system should do Y) goes into the application. There is often a
distinction between a backend (or data access layer) and a frontend. The
backend could theoretically be within the database and be called via
stored procedures, but I find it much easier to test and debug code
outside of the database (this may be just my lack of knowledge about
available tools).
In my applications I use SQL heavily. RDBMSs are good at processing
queries, so use them for that. If all you want is a key-value store,
don't use PostgreSQL. I'm not very fond of ORMs. I know what I want to
do and can express it in SQL. An ORM makes me translate that into a
different (and usually inferior) query language, which is then
translated back into SQL. That doesn't make things easier for me.
My best theory is that these communities developed at a time when Windows was
more dominant, and just generally it was *significantly* easier to use MySQL
than Postgres for many, particularly new, developers.
I come from Oracle, not MySQL, but I have also used MySQL, and I guess
the very wide gap in capabilities between Oracle and MySQL made me
cautious about putting too much into the database. There is also the
expectation that you should be able to use a different database engine
(SQL is a standard, right?) just like you should be able to use a
different C compiler, but in practice that never works. And of course I
wasn't very impressed with PL/SQL. (PostgreSQL gives you a much wider
range of languages for stored procedures than Oracle, but PL/PerlU still
isn't quite the same as Perl, and I suspect it's the same for Python.)
hp
--
_ | Peter J. Holzer | we build much bigger, better disasters now
|_|_) | | because we have much more sophisticated
| | | hjp@hjp.at | management tools.
__/ | http://www.hjp.at/ | -- Ross Anderson <https://www.edge.org/>
Maintaining your database logic in version control and versioning the
deployed code can be a bit problematic.
Conversely, providing a standardized pgsql module through which data is
updated and retrieved can help standardize access logic across multiple
languages and libraries.
And I concur that database portability is a thing people like to discuss,
but rarely occurs.
Portability is important for general ORM tools, less so for corporate
projects (there are always exceptions).
Like any tool, it needs to be used wisely.
I've worked on a project that has 10s of thousands of lines of business
logic in the database and it makes perfect sense for that environment.
--Ray
On Mon, Apr 9, 2018 at 9:45 AM, Ray Cote <rgacote@appropriatesolutions.com>
wrote:
While this discussion is very interesting, it is important to realize
that ultimately, it is the needs and policy of the company that decide
how the database is used. The primary purpose of a DBA is to install the
proper security, protect the integrity of the data, and maintain
performance. Determining whether it is better to place business logic in
the database or the application is strictly a case-by-case decision.
--
Melvin Davidson
Maj. Database & Exploration Specialist
Universe Exploration Command – UXC
Employment by invitation only!
On 2018-04-08 19:39:43 -0400, Stephen Frost wrote:
* Alvaro Aguayo Garcia-Rada (aaguayo@opensysperu.com) wrote:
1. Portability. Being tied to a single database engine is not always
a good idea. [...]
The portability claim tends to be both a false one and often, when
realized, results in a solution where you aren't using the database for
anything complicated and you'd be better off with a much simpler data
store. You also don't actually offer any justification for the claim
that being tied to a single database engine is not always a good idea-
why is that? With commercial databases it tends to be because you are
at the behest of some very large commercial company- but that isn't an
issue with PostgreSQL.
Many companies don't like to support multiple database engines. There is
the license issue, true, but having DBAs is even more important. If you
are an Oracle shop you have your Oracle DBAs, and they know how to make
backups (and how to restore them), how to monitor the database, how to
track down performance issues, etc. If some application needs MS-SQL or
PostgreSQL or MySQL, they won't have that expertise, so they will have
to be trained, or you need additional DBAs (maybe on a part-time or
support contract basis). So all other things being equal, companies will
prefer applications which work with databases they already use.
So if you are an application developer, it makes sense to develop your
application to work with several databases. It vastly expands your
potential clientele.
At least that was the situation 10 years ago. These days much software
is offered as a service. If the customer sees only a REST API and
doesn't have to host the database on their own servers, they won't care
about the RDBMS underneath.
hp
--
_ | Peter J. Holzer | we build much bigger, better disasters now
|_|_) | | because we have much more sophisticated
| | | hjp@hjp.at | management tools.
__/ | http://www.hjp.at/ | -- Ross Anderson <https://www.edge.org/>
Peter J. Holzer <hjp-pgsql@hjp.at> writes:
In my applications I use SQL heavily. RDBMs are good at processing
queries, so use them for that. If all you want is a key-value store,
don't use PostgreSQL. I'm not very fond of ORMs. I know what I want to
do and can express it in SQL. An ORM makes me translate that into a
different (and usually inferior) query language, which is then
translated back into SQL. That doesn't make things easier for me.
Could not agree more! My experience has been that ORMs just get in the
way. Worse yet, when I've investigated performance problems raised by
developers, I've often found it is due to the ORM layer, which is unable
to map more complex queries efficiently.
The only 'layer' I've ever used which I liked was HugSQL. I quite liked
this approach, as you write the queries in SQL and these get exposed to
the application layer as high-level functions, giving a nice clean interface.
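The HugSQL approach described above can be sketched in a few lines: queries stay in plain SQL, keyed by name, and are exposed to the application as ordinary functions. This is only an illustrative Python sketch of the idea (HugSQL itself is a Clojure library); the query name, schema, and loader below are invented, and SQLite stands in for the real database.

```python
import sqlite3

# Plain SQL, written and reviewed as SQL; in HugSQL these would live in .sql files.
QUERIES = {
    "users_by_min_age": "SELECT name FROM users WHERE age >= :min_age ORDER BY name",
}

def make_query_fn(conn, name):
    """Wrap a named SQL statement as a plain function taking keyword arguments."""
    sql = QUERIES[name]
    def run(**params):
        return conn.execute(sql, params).fetchall()
    return run

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", 34), ("bob", 17), ("carol", 52)])

# The application layer sees a high-level function, not an ORM query language.
users_by_min_age = make_query_fn(conn, "users_by_min_age")
adults = users_by_min_age(min_age=18)
```

The appeal is that nothing translates your intent into an intermediate query language and back: the SQL you wrote is the SQL that runs.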
I come from Oracle, not MySQL, but I have also used MySQL, and I guess
the very wide gap in capabilities between Oracle and MySQL made me
cautious about putting too much into the database. There is also the
expectation that you should be able to use a different database engine
(SQL is a standard, right?) just like you should be able to use a
different C compiler, but in practice that never works. And of course I
wasn't very impressed with PL/SQL. (PostgreSQL gives you a much wider
range of languages for stored procedures than Oracle, but PL/PerlU still
isn't quite the same as Perl, and I suspect it's the same for Python.)
hp
Again, totally agree. Nice in theory and reminds me of the 'write once,
run everywhere' dream. Very few of the places I've worked have actually
maintained cross database functionality for long, if at all. The problem
is that while SQL may have a standard, how that standard is implemented
is very different. When I have worked at places which tried to be
database neutral, they inevitably give up as they find that in the end,
they needed to maintain separate SQL or have separate database
maintenance teams anyway. You will only get seamless SQL across
different databases if your SQL is very basic, in which case, you
probably don't need a full-blown RDBMS anyway. Most of the time, your
choice of database will be dictated by the dominant platform in the
market your application targets.
--
Tim Cross
So far, I have nothing to add, but just one thing. See below:
On 09.04.2018 00:37, g@luxsci.net wrote:
One advantage to using logic and functions in the db is that you can
fix things immediately without having to make new application builds.
That in itself is a huge advantage, IMO.
This is actually not the case. You want that logic tested as
thoroughly as possible, since it sits so close to your precious data.
So, you write migration code that substitutes the old logic, test the
whole package, and, if successful, deploy (and thus run the migration).
Cheers,
Sven
I have done some big contracts for large financial companies, and for most
of them, ANY changes to the DB structure required extensive 3rd party
testing and a change control process that sometimes took weeks.
But we did get a waiver for the use of DB 'code' like stored procedures and
views, which only had to follow the standard development test / acceptance
procedure by separate developer, end user and third party test teams.
For me, the database is more immutable than the application logic and
especially the GUI, so it pays to spend a lot of time up front on DB
design. Past experience has also led me to expect that the DBMS will have
a much longer shelf life than the application language / toolsets used
against it, or at least, over time the languages / toolsets tend to
multiply.
For my part, I like to spend a lot of time getting an optimal DB design,
and also putting a lot of validation logic into the DB.
I also like making extensive use of stored procedures; my experience is
that for a data-intensive multi-tool application they are faster and more
secure...
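The point above about validation logic in the DB can be sketched minimally: a constraint declared in the schema rejects bad data no matter which application or tool writes it. The table is hypothetical, and SQLite is used here for brevity (PostgreSQL accepts the same standard CHECK syntax).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The balance rule lives in the schema, so every client is bound by it.
conn.execute("""
    CREATE TABLE accounts (
        id      INTEGER PRIMARY KEY,
        balance INTEGER NOT NULL CHECK (balance >= 0)
    )
""")
conn.execute("INSERT INTO accounts (balance) VALUES (100)")

# A buggy client cannot sneak a negative balance past the database.
try:
    conn.execute("INSERT INTO accounts (balance) VALUES (-5)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True

remaining = conn.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]
```

Because the rule is enforced at the data layer, it holds even for ad-hoc writes that bypass the application entirely.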
On 23 April 2018 at 19:22, Sven R. Kunze <srkunze@mail.de> wrote:
On Sun, Apr 8, 2018 at 15:37 g@luxsci.net <g@luxsci.net> wrote:
On April 8, 2018 02:40:46 pm PDT, "Guyren Howe" <guyren@gmail.com> wrote:
One advantage to using logic and functions in the db is that you can fix
things immediately without having to make new application builds. That in
itself is a huge advantage, IMO.
I doubt most of us would consider this any kind of advantage outside of the
momentary temptation to do it when an app is completely broken and needs to
be up in a hurry. Application changes, whether in the dB or in application
logic, need to be tested, and they need to be revision controlled and
released in a manner that can be easily rolled back in an automated manner.
The fact that putting logic in the database can effectively allow
developers to make unreleased changes to production apps is specifically
one of the problems that I am trying to avoid when I keep most logic in the
app instead of the dB. It’s a whole lot harder to make arbitrary manual
changes to code in the app, whether interpreted or compiled, if it is
running inside a container that cannot be updated. Even if you go in with a
shell and update an interpreted file, the next time that container is
launched the change will be lost, which is usually sufficient motivation to
keep devs from doing that kind of thing.
I’ll put some things in the db, either for performance or because I want
that logic to be built into the data and not be part of the application,
but I choose those contexts carefully and I write them in as portable a
manner as possible. And for those who say migrations don’t happen, I’ve
certainly been through a few, usually as part of an acquisition or the
like, but sometimes simply because another dB server better meets our needs
after a time. And migrating stored procs can be really difficult. Such code
usually has less complete unit and integration tests, which makes
validating those changes more difficult, too.
But the biggest reason is that databases often have to scale up rather than
out, so keeping as much logic in the application code allows my scaling
requirements for the dB server to be as minimal as possible. Sure, there
are workloads where pushing raw data across the wire will be more work than
processing it in the dB, and in those cases, I may do that, but I consider
it premature optimization to just assume that is necessary without hard
evidence from production examples to suggest otherwise.
Finally, there’s the consistency argument. I want to find all of the logic
in one place. Either entirely in the source code or entirely in the dB.
Having to trace things from the code to the dB and back again can make it a
whole lot harder to see, at a glance, what is happening in the code. Having
logic in the dB also means it can be difficult or impossible to have two
releases talking to the same schema version at the same time - so canary
builds and rolling deployments can be difficult. Of course, schema changes
can cause this problem, regardless of whether there are stored procs, but
the more of your logic that lives in the db, the more likely it is that
your releases will conflict over the db. So I’m more likely to be able to
do a rolling release if I keep the db as a dumb data store and keep logic
in the application code.
On Sun, 8 Apr 2018 14:39:49 -0700
Guyren Howe <guyren@gmail.com> wrote:
I am a Rails developer at a medium-large size company. I’ve mostly
worked at smaller companies. I’ve some exposure to other web
development communities.
When it comes to databases, I have universally encountered the
attitude that one should treat the database as a dumb data bucket.
There is a *very* strong aversion to putting much of any business
logic in the database. I encounter substantial aversion to having
multiple applications access one database, or even the reverse: all
abstraction should be at the application layer.
My best theory is that these communities developed at a time when
Windows was more dominant, and just generally it was *significantly*
easier to use MySQL than Postgres for many, particularly new,
developers. And it is pretty reasonable to adopt an aversion to
sophisticated use of the database in that case.
The biggest single problem in most cases is a combination
of communication and attitude.
Recall that most programmers are not DBA's and don't have access to
create anything in the database. Traditionally the High Priests of
Holy Data don't allow mere peons to create or modify content in the
databases. Result is that programmers *have* to do most of their
logic in code, even if the results put hugely more load on the servers.
The DBA's are also not Well Qualified Genius Programmers who obviously
know much more than mere DBA's about what the real use of data is
supposed to be. They don't need no stinkin' SQL! They have the magic
of ORM!
Net result: People talk past one another.
I've nearly been lynched for creating ten-row temporary tables without
prior written permission, or even for suggesting that we might add indexes
to support more efficient use of views.
I've also nearly strung programmers up for running hundreds of
"select * from table" statements because they weren't willing to
figure out how to write a join or simple filter or give me enough
data to do it for them.
A good example is ORM packages: most of the really hideous code required
for joins, group by, and coalesce can easily be pushed into views, allowing
simple "select X from Y where" statements or one-line joins. DBA's can
craft views that make sense; programmers could use them to simplify their code.
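The view argument above can be made concrete: the join is defined once, by whoever knows the schema best, and application code shrinks to a one-line select. The tables and view below are invented for illustration, with SQLite standing in for the real server.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total INTEGER);
    INSERT INTO customers VALUES (1, 'acme'), (2, 'globex');
    INSERT INTO orders VALUES (10, 1, 250), (11, 2, 75);

    -- The hideous part lives in the view, maintained in one place.
    CREATE VIEW order_summary AS
        SELECT o.id AS order_id, c.name AS customer, o.total AS total
        FROM orders o JOIN customers c ON c.id = o.customer_id;
""")

# Application code: a simple "select X from Y where", no join in sight.
rows = conn.execute(
    "SELECT customer, total FROM order_summary ORDER BY total").fetchall()
```

Programmers never repeat the join logic, and the DBA can later change the underlying tables without touching application code, as long as the view's columns stay stable.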
Q: When is the last place you worked where DBAs were pigs (vs.
chickens)?
Q: How often was a DBA involved in the analysis and design cycle
of reporting?
Q: How many programming projects have you been on where the hackers
were willing to write clear, concise data requirements and keep
them up to date?
When that happens people will use the databases as tools, ORM code
will be [semi-] maintainable, and we'll live happily for ever after.
--
Steven Lembark 1505 National Ave
Workhorse Computing Rockford, IL 61103
lembark@wrkhors.com +1 888 359 3508
All,
Just chiming in . . . we’ve taken a somewhat different approach and actually encourage our programmers to build out their own DBs. We’re using Postgres to aggregate many varied data sources into a caching system, and then develop against this aggregated data. Yes, we understand that the realtime stuff is a laggy process. The intent here is to give the programmers a background in DB operation and performance expectations, especially since we primarily build for Web use.
bobb
On Apr 27, 2018, at 9:55 AM, Steven Lembark <lembark@wrkhors.com> wrote:
"The true measure of a man is how he treats someone who can do him absolutely no good."
- Samuel Johnson