Large DB and performance - slow loading for tables with lots of records

Hello,

We are experiencing very slow loading in the Cases module – we are working through our own troubleshooting and the forums. We are looking at possible code/config issues – or it may simply be the fact that it is a large database, approx. 1 GB.

I was more or less looking to get a discussion going among developers and users alike around the issue of performance and optimization.

As the project grows and more and more enterprise-class organizations adopt the application, we will need to ensure that it can scale.

Can anyone share their experience with how they deal with slow loading for modules with a large number of records?

In our case the Cases module has been particularly slow – the support department relies heavily on the Cases module on a daily basis and archives email for every conversation.

I am interested in this discussion; it's a good idea to start it.

There have been a few cases of bad performance here in the forums lately that were caused by excessive unnecessary records in the database. These turned out to be either items with the “deleted” field set to “1”, or records related to workflow logs.

Of course, these were unnecessary records, which is very different from real, useful data.

I don’t know how much you have investigated already, but here’s a fancy query to get a view of your table sizes:

-- Top 10 tables by total size (data + indexes), row counts in millions
SELECT CONCAT(table_schema, '.', table_name)                                          tbl,
       CONCAT(ROUND(table_rows / 1000000, 2), 'M')                                    `rows`,
       CONCAT(ROUND(data_length / ( 1024 * 1024 * 1024 ), 2), 'G')                    DATA,
       CONCAT(ROUND(index_length / ( 1024 * 1024 * 1024 ), 2), 'G')                   idx,
       CONCAT(ROUND(( data_length + index_length ) / ( 1024 * 1024 * 1024 ), 2), 'G') total_size,
       ROUND(index_length / data_length, 2)                                           idxfrac
FROM   information_schema.TABLES
ORDER  BY data_length + index_length DESC
LIMIT  10;

There are a few actions you can take without much risk:

  • really deleting rows that have “deleted=1” (see the first sketch after this list)

  • deleting rows in relationship tables that point to already deleted records (there can be many in tables like securitygroups_records, etc.). I have a fancy query for this also… somewhere – the second sketch after this list shows the general idea.

  • turning on the slow query log option and checking the logs to see which queries are taking longest (settings sketched after this list).
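
For the first point, a minimal sketch of what hard-deleting soft-deleted rows looks like, assuming the usual schema where module tables carry a “deleted” flag. Take a full backup and try this on a copy before running it against production:

-- Count what would be removed before touching anything
SELECT COUNT(*) FROM cases WHERE deleted = 1;

-- Permanently remove soft-deleted case records
DELETE FROM cases WHERE deleted = 1;

-- Repeat for other heavy tables, e.g. archived emails
DELETE FROM emails WHERE deleted = 1;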
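
For the second point, this is only a rough reconstruction of the idea, not the exact query mentioned above; the record_id and module columns are assumptions based on the usual securitygroups_records layout, so verify them against your own schema first:

-- Remove security group links whose target case no longer exists
-- (or is itself soft-deleted); adjust per module/table you clean up
DELETE sr
FROM   securitygroups_records sr
       LEFT JOIN cases c
              ON c.id = sr.record_id
             AND c.deleted = 0
WHERE  sr.module = 'Cases'
       AND c.id IS NULL;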
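
For the slow query log, these are the MySQL settings involved; the 2-second threshold is just an illustrative value, and you would put the same settings in my.cnf to persist them across restarts:

-- Enable the slow query log at runtime and log anything over 2 seconds
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 2;

-- Check where the log file is being written
SHOW VARIABLES LIKE 'slow_query_log_file';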