Thursday, March 06, 2014
Improving Performance with Zeos and PostgreSql
Last month I was asked whether one of our client-server applications could run from a remote site over the Internet. Sure, why not? Just open a port in your router, point the connection's IP to the external address, and done.
A couple of days later, the same customer called again, telling me that the application was working, but it was slow as a turtle, even on a 2Mb link. OK, I closed Hacker News and started thinking about how to debug this.
Logging PostgreSql queries
The first thing I did was get a backup of that customer's database (well, a very old one, because it is now 0.5Tb) and restore it on my PostgreSql 9.2 instance. I did this because I wanted to log everything with just one client connected; otherwise I'd get mixed queries from different PCs.
To log everything, just open your postgresql.conf and set this:
logging_collector = on
log_min_duration_statement = 0
log_statement = all
Save and restart the PostgreSql service; on Debian-based systems just run "sudo service postgresql restart". Then start using the application and go to the pg_log directory (by default /var/lib/postgresql/9.2/main/pg_log), where you'll see postgresql-YYYY-MM-DD_######.log files.
OK, once logging_collector was set up, I opened the application, went to one option that just lists data (a grid), double-clicked a row to open a dialog containing further data, and then closed the application. I thought a couple of queries would be enough for this task, but I was way off.
Let me tell you something about this app. It's an old application started by a small team seven years ago. Initially the team was very focused on design patterns (remember when Java was everywhere? We used Delphi, but the Java way). After those initial years we kept adding features, the team changed over time, and the quality control disappeared (after the experience I'm writing about here, things will change).
Going back to the main topic: what I found in the log files was that the app was doing not two queries but forty. Yes, 40! And that wasn't the only strange thing I found in the logs; there were queries we didn't write, and some warnings I had to get rid of (I'll explain those at the end of this article).
The first step of the improvement process was to remove the unwanted queries, which looked like this:
SELECT n.nspname, c.relname, a.attname, a.atttypid, a.attnotnull,
       a.atttypmod, a.attlen, a.attnum,
       pg_get_expr(def.adbin, def.adrelid) AS adsrc,
       dsc.description
FROM pg_catalog.pg_namespace n
JOIN pg_catalog.pg_class c ON (c.relnamespace = n.oid)
JOIN pg_catalog.pg_attribute a ON (a.attrelid = c.oid)
LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid = def.adrelid AND a.attnum = def.adnum)
LEFT JOIN pg_catalog.pg_description dsc ON (c.oid = dsc.objoid AND a.attnum = dsc.objsubid)
LEFT JOIN pg_catalog.pg_class dc ON (dc.oid = dsc.classoid AND dc.relname = 'pg_class')
LEFT JOIN pg_catalog.pg_namespace dn ON (dc.relnamespace = dn.oid AND dn.nspname = 'pg_catalog')
WHERE a.attnum > 0 AND NOT a.attisdropped
  AND c.relname LIKE E'sysconfig' AND a.attname LIKE E'%'
ORDER BY nspname, relname, attnum
I asked in the Zeos Forum and was told that TZConnection has a UseMetadata property set to True by default, so setting it to False fixed the issue. Now the query count was about 30, roughly a 25% reduction.
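The fix is a one-liner. A minimal sketch, assuming a TZConnection component named ZConnection1 (a hypothetical name; the property just has to be set before connecting):

// With UseMetadata = False, Zeos skips the pg_catalog queries it
// would otherwise run to describe every table's columns.
ZConnection1.UseMetadata := False;
ZConnection1.Connect;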
The next step was to look for repeating patterns in the log file, and I found many queries similar to these:
select varvalue from config where varname='REFRESH_TIME';
select varvalue from config where varname='MAX_DISPLAY_ROWS';
...
As you can see, we store configuration values in the "config" table and query values on demand. This sounds OK, but is it a smart way of doing it? Do config params change in real time? Surely not, so there must be a better way.
Luckily, all the configuration was encapsulated in a class, with properties like TConfig.VarValue, so all the changes to be made were there, inside this class.
The solution was to maintain a cache of variables: we added an internal TCollection that is loaded with the whole config when the application starts, plus a LastAccessTime property containing the last time a variable was accessed; if more than 10 minutes have passed, we reload the collection.
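Here is a minimal sketch of the idea. TConfigCache, FValues and ReloadFromDatabase are hypothetical names, not the app's real ones, and a TStringList stands in for the real TCollection to keep the example short:

uses
  Classes, SysUtils, DateUtils;

type
  TConfigCache = class
  private
    FValues: TStringList;         // varname=varvalue pairs loaded at startup
    FLastLoadTime: TDateTime;     // when the cache was last (re)loaded
    procedure ReloadFromDatabase; // runs: select varname, varvalue from config
  public
    function GetValue(const VarName: string): string;
  end;

function TConfigCache.GetValue(const VarName: string): string;
begin
  // Reload the whole collection when the cache is older than 10 minutes,
  // instead of hitting the database on every single read.
  if MinutesBetween(Now, FLastLoadTime) > 10 then
    ReloadFromDatabase;
  Result := FValues.Values[VarName];
end;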
This single change removed more than 15 queries; now we are down to only 15 queries left to improve.
Our application has a very neat feature: per-grid column configuration, which allows an administrator to configure which columns (and their titles, widths, etc.) must be displayed. This information was loaded every time the grid was refreshed: once when the window was opened, then every 30 seconds. Usually each screen has one main grid and two or three subgrids, each with this functionality, and all of them are refreshed at the same time. We removed the reloading at refresh time, eliminating three queries (for this screen alone); now there are 12 queries left.
Some time ago we implemented "pagination" in our grids to limit the number of records retrieved from the database; as some queries loaded more than 10k records, this was a must-have for us. The way we implemented it was this:
1º Given a query, for example "select * from customers", we first got the total count:
select count(*) from
(select * from customers) as foo;
This allowed us to get the count of any query, which was very neat at the time.
2º Then we executed the real query with a LIMIT and OFFSET:
select * from customers limit 100 offset 101;
This way we used two queries to display "XXXX records of YYYYY". It was a big improvement over what we had initially, but it forced us to issue two queries per grid.
PostgreSql 8.4 added a very neat feature called window functions, which makes possible tasks that former versions could not do. One of them is getting the total record count even when a "limit" is imposed, for example:
select
count(*) over() as Total_Count,
c.*
from customers c
limit 100 offset 101;
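One thing worth noting, shown in a hedged sketch (qryCustomers and TotalCount are hypothetical names, not the app's real ones): every returned row carries the same Total_Count value, so reading it from the first row is enough, and if the OFFSET lands past the end, no rows come back at all.

// One query now feeds both the grid and the "XXXX records of YYYYY" label.
qryCustomers.SQL.Text :=
  'select count(*) over() as total_count, c.* ' +
  'from customers c limit 100 offset 101';
qryCustomers.Open;
if not qryCustomers.IsEmpty then
  TotalCount := qryCustomers.FieldByName('total_count').AsInteger
else
  TotalCount := 0;  // offset past the last row: no rows, hence no count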
After removing the old behavior, the count query on each grid went away; on the screen we used for logging there are three grids, hence 3 queries were removed. Now only 9 queries were made.
The last step doesn't remove a query, but it allowed us to speed up the ones with blob fields and to get rid of some nasty warnings.
Looking at the log files, we found many warnings similar to this:
2014-02-25 09:53:57 ART WARNING: nonstandard use of \\ in a string literal at character 42
2014-02-25 09:53:57 ART HINT: Use the escape string syntax for backslashes, e.g., E'\\'.
2014-02-25 09:53:57 ART STATEMENT: update ....'
This tells us that everything was being escaped with backslashes; since our blob fields were sent as encoded string literals, they were full of backslashes. If we get rid of this encoding, significantly less data is sent over the wire.
Again I asked in the Zeos Forum and was told what to do: just add doPreferPrepared to the TZQuery options and done! This only works if you use parametrized queries; luckily we use them in all of our queries.
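A minimal sketch, assuming a TZQuery named qryPhotos and a parametrized UPDATE (the table and field names are made up): with doPreferPrepared set, Zeos prepares the statement, so the blob travels as a bound parameter instead of a giant escaped string literal.

// Prefer server-side prepared statements for this dataset.
qryPhotos.Options := qryPhotos.Options + [doPreferPrepared];
qryPhotos.SQL.Text := 'update customers set photo = :photo where id = :id';
qryPhotos.ParamByName('photo').LoadFromFile('photo.jpg', ftBlob);
qryPhotos.ParamByName('id').AsInteger := 42;
qryPhotos.ExecSQL;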
Final words
The optimization process didn't finish here, but to avoid making the customer wait any longer, we branched the app and delivered this quicker version. I must admit we are very happy with the results. Sure, we could have created a web version, but as our customer's budget is thin and they needed to use the application within two weeks, I think we did the right thing.
Labels: Lazarus Delphi PostgreSql OpenSource