Malwarebytes EP MBAMService High CPU Usage


Terrence


I just ran into this issue on a workstation, and I suspect it's happening on a lot of my workstations, because many of my users have been complaining that their computers are now very slow and crash frequently.

I opened up Resource Monitor and found MBAMService.exe writing to the hard drive at ~10 MB/s and using 47% of the CPU the entire time; this machine kept it up for the hour I was watching.

This folder: C:\ProgramData\Malwarebytes\MBAMService\logs

It has one file called MBAMSERVICE.LOG and ten rotated copies (MBAMSERVICE.LOG.bk1, .bk2, .bk3, .bk4, and so on). Each file is 10 MB and is being overwritten roughly once a minute.

When I opened the log file, every entry in every log file is the same four lines, repeated about twice per second:

10/19/17    " 09:30:32.523"    61740422    08ac    0940    ERROR    CHAMCTRL    CControlWatchdogDriver::GetRefCount    "ControlWatchdogDriver.cpp"    305    "GetRefCount (err = 2) = 4294967295"
10/19/17    " 09:30:32.524"    61740422    08ac    0940    ERROR    CHAMCTRL    CControlWatchdogDriver::DecrementRefCount    "ControlWatchdogDriver.cpp"    272    "Error getting driver RefCount - 2"
10/19/17    " 09:30:32.524"    61740422    08ac    0940    ERROR    CHAMCTRL    CControlWatchdogDriver::Remove    "ControlWatchdogDriver.cpp"    370    "Failed to remove reference"

10/19/17    " 09:30:32.524"    61740422    08ac    0940    ERROR    SPSDK    Uninstall    "SelfProtectionUser.cpp"    182    "SelfProtection driver failed to uninstall. LE=0."

 

Each file only covers about 3 seconds, but there are almost 10 entries per millisecond, so my MBAMSERVICE.LOG file runs from 9:30:32.523 to 9:30:35.265 and has hundreds of lines in it. Every one of these log files looks the same.

Not all my endpoints are doing this, but at least one is, and given the number of people complaining about slow computers, I'd hazard a guess that at least 25% of my endpoints have this problem.
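For anyone else checking their endpoints, a quick way to see whether a machine is churning logs like this is to look at the file timestamps in that folder, e.g.:

DIR /O:D "C:\ProgramData\Malwarebytes\MBAMService\logs"

If the .bk files all show timestamps within the last minute or two, that endpoint is almost certainly stuck in the same loop.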


  • 1 month later...

A few weeks ago I implemented a fix/workaround for this, and it has resolved every single MBEP issue I've had since purchasing the product last September. This includes endpoints randomly dropping off the cloud console and interference with our regular AV and other unrelated in-house software applications (which were whitelisted). None of the suggestions from the support staff ever worked for us, including those listed above. Since implementing my own workaround, our MBEP struggles have completely stopped.

The issue apparently is caused by a persistent memory leak in the MBAMService.exe process. On endpoint startup, the memory used (as shown in Task Manager) on all our PCs starts at around 250,000 K. Left unattended, that usage slowly creeps higher and higher. After a few days to a week it rises to 400,000 K to 500,000 K, at which point we start having issues with our other software and the first set of endpoints disappears from the cloud console. More and more endpoints drop off as the hours and days pass. Left alone even longer, memory rises to between 500,000 K and 1,000,000 K. At that level, MBAMService.exe CPU usage climbs and gets stuck well above zero: on our fastest machines it runs steadily at 13%, just enough for users to notice performance hesitations, and on our slowest machines it runs steadily at 50%, which basically cripples them. To stop this, the endpoint must either A) be restarted, or B) have the MBAMService.exe process manually killed and restarted. After doing that, everything calms down and starts working again. Everything.

So after months of endless frustration, I ended up writing a .cmd script that stops the cloud service (MBCloudEA.exe), forcibly kills the MBAMService.exe process, restarts MBAMService.exe, and finally restarts the MBCloudEA.exe service. I put a copy of this .cmd script on each endpoint and set Task Scheduler to run it (as Local System) at 6:00 AM every morning. Ever since implementing this workaround, all our MBEP problems have completely vanished and I haven't looked back.
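For reference, the scheduled task can also be created from the command line; something along these lines should be equivalent to what I set up in the GUI (the task name and script path are just examples, point it at wherever you keep the .cmd file):

SCHTASKS /Create /TN "Reset MBEP Services" /TR "C:\Scripts\ResetMBEP.cmd" /SC DAILY /ST 06:00 /RU SYSTEM /RL HIGHEST /F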

If interested, here's the script I'm using:

 

@ECHO OFF
:: Reset Malwarebytes Endpoint Protection services

:: Stop the cloud agent, then force-kill the stuck MBAMService process
NET STOP MBEndpointAgent
TASKKILL /IM MBAMService.exe /F

:: Give MBAMService.exe time to exit (it usually relaunches itself)
TIMEOUT /t 10 /nobreak

:: Start the service explicitly in case it didn't come back, then the cloud agent
NET START MBAMService
NET START MBEndpointAgent

 

It's incredibly simple. I chose to kill the MBAMService.exe process rather than stop the service because once the high CPU usage gets stuck, the service often can't be stopped normally. After being killed, MBAMService.exe re-executes itself, but I start it again manually in case it doesn't; starting it while it's already running doesn't harm anything. Also, if the issue is allowed to get bad, the cloud service has problems and sometimes stops on its own when MBAMService.exe restarts, which is why I stop and restart the cloud service at the beginning and end of the script.
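If restarting blindly every morning feels heavy-handed, the same reset could in principle be made conditional on memory usage; here's a rough, untested sketch using TASKLIST's MEMUSAGE filter (the 500,000 K threshold is just the point where our problems start, adjust to taste):

:: Only reset when MBAMService.exe is using more than ~500,000 K
TASKLIST /FI "IMAGENAME eq MBAMService.exe" /FI "MEMUSAGE gt 500000" | FIND /I "MBAMService.exe" >NUL
IF NOT ERRORLEVEL 1 (
    NET STOP MBEndpointAgent
    TASKKILL /IM MBAMService.exe /F
    TIMEOUT /t 10 /nobreak
    NET START MBAMService
    NET START MBEndpointAgent
)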

 


  • 1 month later...

Glad to have found this thread, because I just got a "low on memory" message from Windows and ran into the same thing.

Unfortunately, I got the following error on the TASKKILL statement:

ERROR: The process "MBAMService.exe" with PID 608 could not be terminated.
Reason: Access is denied.

Not happy that this suddenly cropped up...
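For anyone scripting this, it's probably worth checking whether the kill actually succeeded, so a failure like the one above gets logged instead of silently doing nothing; a rough sketch (the log path is just a placeholder):

TASKKILL /IM MBAMService.exe /F
IF ERRORLEVEL 1 ECHO %DATE% %TIME% TASKKILL on MBAMService.exe failed - check for access denied >> C:\Scripts\ResetMBEP.log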

[Attached screenshot: mbamissue.JPG]

 

Edited to add: after killing the service, which relaunched almost immediately, MBEP is acting like Pac-Man and chewing up memory again.

Now, nothing's working to kill this.


I disabled the scan in the policy, but because I don't know the polling interval to the client, I can't tell whether it's still enabled on the client.

And there's nothing on the client side to force it to pull the policy down.

Time to reconsider this offering...


MBAMService just crashed 26 of my servers, all with high memory usage. I had to work through getting them back online, disabling the service, and then killing the process; that was the only way I could stop it from crashing the servers. If you just killed the process, it would restart and take all the memory again. I mean ALL of it. I've had software with memory leaks before, but I've never seen anything leave literally 50 MB free out of 64 GB before.
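For anyone needing to do the same, the rough equivalent from an elevated command prompt is something like this (disable the service first so the process can't respawn, then kill it):

SC CONFIG MBAMService start= disabled
TASKKILL /IM MBAMService.exe /F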

I will be removing this from all my servers and will not be reinstalling, as I just spent 3 hours of my Saturday trying to get things functional again.

Rob


  • Staff

We're sorry you had issues with our program today. We've addressed the issue, and here's what you need to do to fix it:

Malwarebytes Endpoint Protection (cloud)

  1. From the Malwarebytes Cloud console, go to the endpoints pane and select all the endpoints.
  2. In the action drop-down, choose the ‘check for protection updates’ option to force an update on all endpoints to database update 1.0.3803 or higher.

This should fix the problem for the vast majority of Endpoint Protection endpoints. If endpoints are still affected after applying this, please reboot the machine.

If the remote agent is unable to reach out and get this update, you will need to temporarily disable Web Protection:

  1. In the Malwarebytes Cloud console, go to Settings > Policies and open the policy the clients are on.
  2. From here, go to the endpoint protection policy and turn off the “Web Protection” portion of the policy. Then:
    • If the machine is unresponsive, reboot the machine and log in.
    • Once in, right click on the tray icon and start a scan. This will force a database update and fix the issue.
    • Once updated, cancel the scan and reboot the machine.
  3. When the computers are all online and updated, please turn Web Protection back on in the Endpoint Policy.

To learn more about what happened, please go here: 

 


15 hours ago, King_Of_The_Castle said:

Surprisingly, none of my 126 clients suffered from this weekend's catastrophe. Is it too early to call it a victory?

As soon as they realized the issue, they stopped their update server, so your endpoints probably got lucky and didn't receive the malformed update. Mine were not so lucky. Like many others, I spent my weekend chasing my tail trying to keep our dispatch and server room online.

