Web Services performance not always reliable, need suggestions

I’ve been using SuiteCRM for around two years now and have done a lot of work integrating it with other platforms such as Drupal and stand-alone forms.
There’s been one persistent issue that I haven’t been able to figure out. I’m currently running version 7.8.0.

For some reason, web services using REST (set_entry, set_relationship, get_entry) occasionally fail. I did some debugging and I can see that the requests are always sent, and it never has issues creating sessions. What I did notice is that in these instances, the response from my SuiteCRM site is blank. I’m at a loss on what to look at, since it only happens occasionally, but it’s starting to cause problems the more I use it and the more the system grows. I can’t work in pure MySQLi because many of the modules I’m saving to require logic hooks to run. I’ve also run into some other issues: creating a relationship in Studio requires me to run Rebuild Relationships afterwards, and sometimes when I save a field it returns “undefined”.

Has this happened to anyone else? Is there something I can look at doing? Could it be that my SuiteCRM server needs more resources? My web server, which only has the SuiteCRM install, has 8 GB of RAM, and my database on a separate server has 6 GB.

Have a look at this post, and run the first query you find there:

https://pgorod.github.io/Database-tables-size/

This will give us a quick glimpse of any overgrown tables in your database. Maybe the API failures are simply a consequence of slow queries.
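If you prefer not to open the link, the query in that post is along these lines: a standard information_schema lookup (the exact column aliases in the linked version may differ slightly):

```sql
-- Approximate size of each table: rows (millions), data, indexes,
-- total (GB), and the index-to-data ratio, largest tables first.
SELECT CONCAT(table_schema, '.', table_name)                      AS tbl,
       ROUND(table_rows / 1000000, 3)                             AS rows_m,
       ROUND(data_length / (1024*1024*1024), 3)                   AS data_g,
       ROUND(index_length / (1024*1024*1024), 3)                  AS idx_g,
       ROUND((data_length + index_length) / (1024*1024*1024), 3)  AS total_g,
       ROUND(index_length / data_length, 3)                       AS idx_frac
FROM information_schema.TABLES
ORDER BY data_length + index_length DESC
LIMIT 10;
```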

Note that MySQL has a helpful “log slow queries” option (search for “slow query log”).
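For reference, on MySQL you can turn it on at runtime with the standard server variables (this needs sufficient privileges, and the log file path below is just an example):

```sql
-- Log any query taking longer than 2 seconds.
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 2;
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';  -- adjust path
```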

If we come to the conclusion that there is actually a problem with the API, note that there is a new one (API v8) in the newest 7.10 versions.

Hello, I ran that query and this was the result:

Table                            Rows(M)  Data(G)  Index(G)  Total(G)  Idx/Data
fusion.campaign_log                0.339    0.162     0.169     0.331     1.040
fusion.emails_beans                0.287    0.145     0.094     0.239     0.643
fusion.tracker                     0.284    0.061     0.092     0.153     1.522
fusion.job_queue                   0.113    0.066     0.050     0.116     0.763
fusion.leads                       0.025    0.013     0.033     0.046     2.489
fusion.aod_indexevent              0.078    0.025     0.018     0.043     0.709
fusion.emails_text                 0.003    0.040     0.000     0.041     0.010
fusion.prospect_lists_prospects    0.069    0.013     0.022     0.035     1.635
fusion.cases                       0.003    0.029     0.001     0.030     0.047
fusion.acl_roles_actions           0.037    0.009     0.018     0.028     1.956

You have lots of clean up to do :slight_smile:

Each table has its own challenges. If you search for previous cases of people with large-table problems, you’ll find some solutions here in these forums. There are also tips in the post I linked.

It would be nice to know which one is causing you actual problems, so you could focus on that one.

These tables are auxiliary; you shouldn’t need to keep most of their contents:

fusion.campaign_log 0.339M 0.162G 0.169G 0.331G 1.040
fusion.tracker 0.284M 0.061G 0.092G 0.153G 1.522
fusion.job_queue 0.113M 0.066G 0.050G 0.116G 0.763
fusion.aod_indexevent 0.078M 0.025G 0.018G 0.043G 0.709

The “leads” and “emails” tables are different: that’s your actual data. But perhaps you have tons of “deleted=1” rows in there? Or orphaned records?
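A quick way to check is to count the soft-deleted rows per table; for example:

```sql
-- How many soft-deleted rows are sitting in leads and emails?
SELECT 'leads' AS tbl, COUNT(*) AS deleted_rows FROM leads WHERE deleted = 1
UNION ALL
SELECT 'emails', COUNT(*) FROM emails WHERE deleted = 1;
```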

Our leads count actually reflects the number of leads we have (and that should be there), and it’s been the most problematic module. Most of our custom modules have a relationship to Leads, and the lead information is pulled often by our web services. As for deleted records, we only have 8 where deleted=1.

Still, make sure you clean the other auxiliary tables, it will surely make a difference in your general performance.

Use “log slow queries” to determine your critical tables.

And use this kind of query to examine orphaned relationships involving leads and emails_beans:

SELECT * -- DELETE ChildTable
FROM contacts_cstm ChildTable
LEFT JOIN contacts ParentTable
  ON ChildTable.id_c = ParentTable.id
WHERE ParentTable.id IS NULL;

The first line is a SELECT, which you can then change to a DELETE as shown in the comment (the part after "-- " is a comment). Obviously, be very careful and have backups before you run any DELETEs directly in the database.

You only have to change the table names, in this case “contacts” and “contacts_cstm”, and the fields in the ON condition, nothing else.

With this you can list records that are not linked to anything, or clean entries from relationship tables whose corresponding target records no longer exist (possibly deleted).
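For instance, for the emails_beans relationship table from your size listing, the same pattern would look like this (this assumes the standard SuiteCRM schema, where emails_beans links to emails via its email_id column; double-check the column names on your install):

```sql
SELECT * -- DELETE ChildTable
FROM emails_beans ChildTable
LEFT JOIN emails ParentTable
  ON ChildTable.email_id = ParentTable.id
WHERE ParentTable.id IS NULL;
```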

Even if I do manage to clean some items, my database will always grow. For auditing purposes we can’t remove a lot of our module data for at least 5–7 years.
Are there any options? Are there server-side resources we should look into, such as more RAM or an SSD, and would that help, or are we still limited by the capabilities of SuiteCRM itself? Are the new APIs in 7.10 more optimized?

Don’t worry, SuiteCRM can handle a lot more than that. But as things start to grow you will have more work as a sysadmin :slight_smile:

Ok, where can I start…

Hardware:

  • yes, using SSDs sounds like a sensible idea (the last spinning disk I bought, except for backup purposes, was in 2011…)
  • you can scale a bit by splitting database server and web server. Just clone your server, then shut off the web server on one, and shut off the database on the other. Adjust the config.php to connect to the database on the other server. Give them a fast dedicated network connection.

Cleaning up auxiliary tables:

  • as I said, don’t neglect this. Some of these things involve queries that run every time you’re doing something else in your system.
  • aod_indexevent: I think (not sure) that this can all be deleted. Test this on a clone first. Or you could try re-indexing to see if it produces a much smaller table.
  • campaign_log: do you need old campaign events stored? Even for Campaigns that have finished? Can you just store some aggregated data elsewhere, like a history of past Campaigns in Excel?
  • job_queue: perfectly safe to delete jobs where status='done' and the date is older than one month (for example).
  • tracker: if you’re not using it, turn it off. It’s a tool to see which screens your users are visiting, and what they’re up to in the system.
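For the job_queue suggestion above, the cleanup would be something like this (run the same WHERE as a SELECT COUNT(*) first, and have a backup before deleting):

```sql
-- Remove completed scheduler jobs older than one month.
DELETE FROM job_queue
WHERE status = 'done'
  AND date_modified < DATE_SUB(NOW(), INTERVAL 1 MONTH);
```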

DBA work

  • normally, a good DBA can make any database faster by analyzing queries and adjusting indexes. This can be a good solution for your largest tables with the most data: make sure the queries behind your API calls are going through appropriate indexes.
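The usual workflow: take a query from the slow query log, run EXPLAIN on it, and add an index if the plan shows a full table scan. A sketch (the filter and index here are illustrative, not from your system):

```sql
-- See how MySQL executes a query taken from the slow log.
EXPLAIN SELECT * FROM leads WHERE account_name = 'Acme' AND deleted = 0;

-- If the plan shows a full scan (type = ALL), an index may help:
CREATE INDEX idx_leads_account_name ON leads (account_name, deleted);
```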

Move Auditing data offline:

  • the old, stale data, that you just need for auditing purposes, can be stored in separate tables, not mixed with the queries that really matter to you during normal operation.
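One simple approach, sketched below: copy old audit rows into an archive table that normal queries never touch, then delete them from the live table. The table name and cutoff are illustrative; test on a clone and keep backups:

```sql
-- Move audit entries older than one year into an archive table.
CREATE TABLE IF NOT EXISTS leads_audit_archive LIKE leads_audit;

INSERT INTO leads_audit_archive
SELECT * FROM leads_audit
WHERE date_created < DATE_SUB(NOW(), INTERVAL 1 YEAR);

DELETE FROM leads_audit
WHERE date_created < DATE_SUB(NOW(), INTERVAL 1 YEAR);
```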

API v8

  • yes, the v8 API should be better and more performant.
  • remember that your use of the API can probably be optimized. Make sure you’re not getting huge lists, or unnecessarily repeating searches, etc.

Tell me if you need more help. Good luck!