I’m currently evaluating SuiteCRM, having used vTiger and Sugar in the past, and am also considering a paid CRM like Capsule.
My concern with many of the open-source CRM offerings, based on experience, is that they are very temperamental.
After installing SuiteCRM, I seem to be seeing a lot of MySQL connection problems like the following:
[9157][-none-][FATAL] Retrieving record by id users:1 found Query Failed: SELECT users.* FROM users WHERE users.id = '1' AND users.deleted=0 LIMIT 0,1: MySQL error 2006: MySQL server has gone away
On checking the server’s max_allowed_packet, it is already set to around 500M, so it is unlikely to be that.
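(For anyone wanting to check the same thing, the value can be read with standard MySQL:

SHOW VARIABLES LIKE 'max_allowed_packet';
)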
There is a wait_timeout set to around 60 seconds, and ideally any PHP script should react to this error by attempting to reconnect at least once rather than just erroring out.
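Something like this is the behaviour I have in mind, sketched with plain mysqli for illustration (SuiteCRM’s own database layer works differently, and the $connect factory here is hypothetical):

<?php
// Run a query, reconnecting once if the server has gone away.
// $connect is a hypothetical factory returning a fresh mysqli connection.
function query_with_reconnect(callable $connect, string $sql)
{
    $db = $connect();
    $result = $db->query($sql);

    // 2006 = CR_SERVER_GONE_ERROR ("MySQL server has gone away"),
    // 2013 = CR_SERVER_LOST; retry exactly once on either.
    if ($result === false && in_array($db->errno, [2006, 2013], true)) {
        $db = $connect();           // open a fresh connection
        $result = $db->query($sql); // and retry the query once
    }
    return $result;
}

// Usage:
// $connect = fn() => new mysqli('localhost', 'user', 'pass', 'suitecrm');
// $rows = query_with_reconnect($connect, "SELECT * FROM users WHERE deleted = 0");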
I’m at a loss as to the cause and how to resolve this. Could anybody offer any advice?
I don’t think it’s just a timeout; it’s something deeper that is usually solved either by new hardware (in the rare case of an intermittently failing RAM module or network card), by a fresh installation of MySQL, or by fixing some other component in the underlying OS.
BTW, if you provide more details about your versions (OS, MySQL, PHP, SuiteCRM), that usually helps narrow things down.
I’ve seen basically every thread on these forums for the past 3 years or so, and everything on our GitHub as well. I’ve come across this error some 10 or 15 times, and it was never solved by anything we did on SuiteCRM’s side. People just tweak their systems and their settings until the problem suddenly evaporates.
Note what this is: SuiteCRM asks your DB for a query result, and your DB process crashes with its ugliest error. It’s not complaining about a malformed query or anything like that; it is crashing.
That said, we should try to find ways to get your system to a point where this isn’t happening.
A few things that come to mind for you to check…
These settings in your php.ini:
memory_limit
max_execution_time
See if those are generous enough, and restart your web server if you change anything.
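For example, something like this in php.ini (the values are purely illustrative, not recommendations for your server):

memory_limit = 512M
max_execution_time = 300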
I know this is an old post; however, I faced it too, and here is how I fixed it (kinda).
I faced this issue when I was querying the database directly from a function I had written for the scheduler. Basically, this function ran something like:
SELECT * FROM x_table WHERE id IN ( /* a very large list of ids */ );
I think it started to malfunction when the number of ids in the IN clause went above 1 million.
Anyway, shrinking that list and processing the ids in batches made it work.
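Roughly like this, sketched with mysqli; $db (an existing mysqli connection), $ids, the table name, and the batch size are all just illustrative:

<?php
// Process a huge id list in batches instead of one giant IN (...) query.
$batchSize = 1000; // illustrative; tune to your server

foreach (array_chunk($ids, $batchSize) as $batch) {
    $placeholders = implode(',', array_fill(0, count($batch), '?'));
    $stmt = $db->prepare("SELECT * FROM x_table WHERE id IN ($placeholders)");
    $stmt->bind_param(str_repeat('i', count($batch)), ...$batch);
    $stmt->execute();
    $result = $stmt->get_result();

    while ($row = $result->fetch_assoc()) {
        // process $row ...
    }
    $stmt->close();
}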
P.S. When I faced this problem I did change max_allowed_packet in the MySQL config to 1 TB, but that did not work either. Though I do not know whether something like changing ports might have solved the issue.
One more thing. Someone might face this issue too when there are a lot of records in an array, like 10 million entries. In that case, using the classic array_chunk() function of PHP is not a good idea, as it would blow past PHP’s memory limit. Instead, try implementing your own version of array_chunk() with memory cleanup, calling the garbage collector yourself (see the sketch below).
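A generator-based replacement is one way to do that. This is only a sketch of the idea: the main saving is that it yields one batch at a time instead of materialising every chunk up front, and gc_collect_cycles() is included only because calling the collector was suggested above:

<?php
// Yield one batch at a time instead of building all chunks in memory,
// so the source array is never duplicated.
function chunked(array $items, int $size): Generator
{
    $batch = [];
    foreach ($items as $item) {
        $batch[] = $item;
        if (count($batch) === $size) {
            yield $batch;
            $batch = [];          // drop the batch we just yielded
            gc_collect_cycles();  // nudge the garbage collector, as suggested
        }
    }
    if ($batch !== []) {
        yield $batch; // any leftover items
    }
}

// Usage:
// foreach (chunked($ids, 1000) as $batch) { /* query this batch */ }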