MBAM 3.0 and av-comparatives.org



On 12/16/2016 at 9:14 PM, exile360 said:

We don't refuse to be tested at all.  In fact, there are currently plans being discussed right now to have official testing done by at least one or more of the testing organizations.

So, any news about those "plans being discussed right now to have official testing"?

It has been more than half a year now.

Thanks!


We've actually just received another Checkmark Certified certification from West Coast Labs for our remediation.  So we do participate in tests that make sense for the product.

The biggest problem with other comparative AV testing is that the current test methodologies don’t mimic real-world environments. 

The best illustration of our performance against real-world threats is how we handle actual malware outbreaks.  If you look at two recent examples of widely spreading threats that made the news – WannaCry and Petya/NotPetya—our Malwarebytes Premium protection blocked each of those out of the gate via our signature-less protection.  

So a massive signature database of old, non-active threats is not an indicator of overall protective capability.

While we're still considering other test options, our primary focus remains on research and staying ahead of the newest threats hitting the scene, as we've done for years.  This is why, every day, we are able to find and clean threats that other AVs with high scores in those comparative tests miss.


2 hours ago, bdubrow said:

So a massive signature database of old, non-active threats is not an indicator of overall protective capability.

True, but wouldn't that be a slam dunk for MBAM? Why the hesitancy? 

Each time a video appears on YouTube showing malware infecting a protected PC, MB's response is "that's not a real world test". Yet MB has yet to demonstrate a "real world test" of its own. Why not? By that logic, I could argue that Notepad protects against malware and dismiss every infection by asserting "that's not a real world test".

White papers and theories about attack vectors are just that... words. We want to see an unedited video that clearly demonstrates MBAM's effectiveness, and we want the challenge malware to be accessible to independent testers to ensure that it is real, and not tweaked to trip the detection algorithms used by MBAM.


8 hours ago, bdubrow said:

database of old, non-active threats

AV-TEST uses 202 samples for 0-day malware; being zero-days, I doubt they are old, non-active threats.

For the rest of the detection testing, they use "widespread and prevalent malware discovered in the last 4 weeks (the AV-TEST reference set)", so again, no old, non-active threats.

So, in fact what are you talking about?



26 minutes ago, lock said:

AV-TEST uses 202 samples for 0-day malware; being zero-days, I doubt they are old, non-active threats.

For the rest of the detection testing, they use "widespread and prevalent malware discovered in the last 4 weeks (the AV-TEST reference set)", so again, no old, non-active threats.

So, in fact what are you talking about?


You make a good point.  I believe that AV-Comparatives uses only web-based exploits in its real-world tests, rather than a zoo of static malware samples.  So it is a false argument to claim that all the test organizations rely only on static samples for their testing.


9 hours ago, bdubrow said:

So a massive signature database of old, non-active threats is not an indicator of overall protective capability.

This was actually in response to a comment much earlier in the thread about the massive signature databases included in other AV products.  I'm not commenting on the sample set used by test organizations.  Our concerns and continued discussions for participation in these tests are around methodology. 

Bottom line, we may participate at some point in the future, but we've had other competing priorities and thus this has not been a key focus.  

We've also discussed providing videos showing actual detections, and while I don't think we could provide the actual samples (as a matter of policy), we might be able to provide an identifying hash or something for those who wish to recreate the tests. 

We do understand that folks are interested in seeing this sort of "proof" -- completely get that.  We'll see if we can raise the priority on getting something in place.  :)


  • 2 months later...

Hey @WolfRules, I haven't heard anything new on that front yet, but I'll look into it immediately and get back to you.  I'm actually quite anxious myself to see some tests too because I know that some of the new features we've rolled out recently are quite potent, including a new heuristics detection layer that looks very promising.

I'll post back here if I get any news from the team that I can share :) 


OK, I got some more info for you.  While this has not been forgotten and they definitely still plan to do this, some other things have taken priority, which is why it hasn't happened yet.  For one thing, they've obviously been focused on fixing certain issues with the product reported by some of our customers (something that always takes priority), and there are also some new technologies under development that have diverted resources away from projects like this for the time being.

I wish I could say more, but I'm sure you can understand that some things must remain secret in order to ensure our effectiveness against the latest threats.  Just know that this hasn't been overlooked and they do indeed still intend to have these tests done; we just don't know when that will happen due to other priorities.


Hello again :)

I thought you might find this interesting.  Do keep in mind that it only shows where our scanner (not even our protection-only components such as our anti-exploit, anti-ransomware or web blocking) has detected threats missed by resident/active AVs.  What's most startling is that these days, we tend to block a LOT of threats/attacks with those other components I mentioned, so I can only imagine what the numbers would look like if they were included, but you can bet they'd be much higher.  Also, as I mentioned previously, we have other new protection components under development right now which will only improve our detection rates.


4 hours ago, exile360 said:

I thought you might find this interesting.  Do keep in mind that it only shows where our scanner (not even our protection-only components such as our anti-exploit, anti-ransomware or web blocking) has detected threats missed by resident/active AVs....

Hi exile360:

The Third-Party Testing & Antivirus Replacement section of your Malwarebytes 3 - Frequently Asked Questions thread states that antivirus replacements like Malwarebytes "utilize signature-less and behavior-based detection technologies to catch the latest and most relevant threats, as opposed to anti-virus programs that rely on large databases of signatures that can quickly become outdated and are typically ineffective against many modern threats".

Am I correct that the "traditional AV technology" discussed in the blog entry Traditional AV solutions shown ineffective in real-time global heat map only tested signature-based scanning, where the digital signature (e.g., SHA-256 hash) of an unknown file is compared with the white-listed (safe) / black-listed (malware) signatures stored in a virus definitions database?

Almost every modern antivirus program that runs in real-time protection mode now includes heuristic behaviour-based detection as an extra layer of protection to supplement "traditional" signature-based scanning.  Even if those tests were run without anti-exploit / anti-ransomware / web-based protection (features now appearing in many modern antivirus programs like Norton Security, Win 10 Windows Defender, etc.), testing Malwarebytes 3.x (signature-based and heuristic behaviour-based scanning) against a "traditional" antivirus (signature-based only) isn't a fair comparison.
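To make the distinction concrete, the "traditional" signature scanning described above boils down to an exact-hash lookup. A minimal sketch (the blocklist entry here is just the SHA-256 of an empty file, for illustration; real AV engines use far richer signature formats than plain hashes):

```python
import hashlib

# Hypothetical blocklist of known-bad SHA-256 digests.  The single entry
# below is the digest of the empty byte string, used purely for illustration.
KNOWN_BAD = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(data: bytes) -> str:
    """Compute the hex SHA-256 digest of a file's contents."""
    return hashlib.sha256(data).hexdigest()

def is_flagged(data: bytes) -> bool:
    """Classic signature check: flag a sample only on an exact hash match."""
    return sha256_of(data) in KNOWN_BAD
```

The weakness the thread keeps circling back to is visible right in the scheme: changing a single byte of the sample produces a different digest, so the lookup no longer matches.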
----------
32-bit Vista Home Premium SP2 * Firefox ESR v52.4.0 * NS Premium v22.11.0.41 * MB Premium v3.2.2.2018-1.0.212


No, we did not restrict the components used by the AVs in any way.  All of that data is pulled from Malwarebytes users' systems and reflects what Malwarebytes scans detect on systems where each of those AVs is installed (as reported by Security Center/Action Center in Windows).  So whatever those Malwarebytes scans detect are threats that got past whatever protections the user of each system had active at the time, including all components of their AVs.


16 hours ago, exile360 said:

No, we did not restrict the components used by the AVs in any way.  All of that data is pulled from Malwarebytes users' systems and reflects what Malwarebytes scans detect on systems where each of those AVs is installed (as reported by Security Center/Action Center in Windows).  So whatever those Malwarebytes scans detect are threats that got past whatever protections the user of each system had active at the time, including all components of their AVs.

Hi exile360:

Thank you for the clarification, but your last post raises even more concerns for me about the methodology.

If the Malwarebytes Remediation Map is displaying data in real time every time MB v3 detects a threat that is "missed" by a competitor's AV, could any of those dots represent a false positive detection by MB?  Could the data be skewed by MB users like myself who have data collection disabled (Settings | Application | Usage and Threat Statistics |  Anonymously help fight malware by providing usage and threat statistics) and have Settings | Application | Windows Security Center set to Never register Malwarebytes in the Windows Security Center?  Even the disclaimer below that map states: " This is not a perfect comparison...The data is presented in accordance with the methodology explained above and there is no correlation or interpretation applied to such data."

I also suspect most "traditional" AV manufacturers would argue that running MB v3 in real-time protection mode alongside their software actually decreases the effectiveness of their product - see the Norton support article Problems Running Multiple Security Products for one example.

This is why I'd like to see Malwarebytes participate in side-by-side malware detection tests from independent organizations like AV-TEST or AV-Comparatives - to minimize any bias in the testing methodology and see if MB v3 actually lives up to its claim that it is an "antivirus replacement".
___________________________________

And just an aside, but if users are wondering why I have Usage and Threat Statistics disabled, they should read the Software Collection Addendum of Malwarebytes' Privacy Policy on "anonymous" data collection - particularly the section on Client Data, which reads in part: "For this data we identify each system with a unique identifier that is created at install time, so it is possible to track changes to an individual system over time".
----------
32-bit Vista Home Premium SP2 * Firefox ESR v52.4.0 * NS Premium v22.11.0.41 * MB Premium v3.2.2.2018-1.0.212


4 hours ago, lmacri said:

If the Malwarebytes Remediation Map is displaying data in real time every time MB v3 detects a threat that is "missed" by a competitor's AV, could any of those dots represent a false positive detection by MB?  Could the data be skewed by MB users like myself who have data collection disabled (Settings | Application | Usage and Threat Statistics |  Anonymously help fight malware by providing usage and threat statistics) and have Settings | Application | Windows Security Center set to Never register Malwarebytes in the Windows Security Center?  Even the disclaimer below that map states: " This is not a perfect comparison...The data is presented in accordance with the methodology explained above and there is no correlation or interpretation applied to such data."

Yes, of course an FP is possible, but if we were to have any FP common/frequent enough to significantly skew the results, our forums and support systems would be flooded with affected users reporting it.  If it is some obscure FP on a file almost no one has, then it wouldn't be a large enough occurrence to affect the data in any meaningful way.  Besides, if Malwarebytes were that prone to FPs I seriously doubt we'd have the reputation we do or the number of users and customers that we have.  So is an FP showing up on the map a possibility?  Of course, but it's not very likely.

As for disabling data collection, yes, that would affect the data.  It would reduce the number of entries showing up in the map if you've had Malwarebytes detect anything via a scan if you've also got an AV registered with Action Center, so it reduces the data in favor of the AVs.  Choosing whether Malwarebytes registers with Action Center/Security Center wouldn't affect the results.  Most of this data has to be from free users as paid users of Malwarebytes would be far more likely to have any threat blocked/quarantined before a scan is run and the only data being reported on the map is from scans where something malicious has been detected (we aren't including PUPs either).  It's as I said earlier in this thread, were we to include detections from our realtime components, the data would likely grow massively.

As for running MB3 in realtime, again, most of this data (if not all of it) must be coming from free users of Malwarebytes.  It happens far less frequently that our scans detect anything when our realtime components are active because they tend to prevent anything from getting onto the system which a scan would detect.

We will have side-by-side testing, but the methodology must be realistic otherwise the results would be useless.  The vast majority of testing I've seen was anything but realistic unfortunately.

Our data on this map is quite basic.  In order for something to show up, the user must have one or more AVs registered with Security Center/Action Center and MB3 must detect at least 1 malicious item on the system via a scan, meaning we don't count any of the data coming from paid/trial MB3 users where any of our realtime components has blocked/quarantined a threat of any kind.  So we aren't even comparing our paid product to the AVs here, it literally compares our scan-only database to the resident AV(s), meaning a LOT of what we claim makes MB3 a sufficient AV replacement product is not being included in this data set at all.

We're basically crippling ourselves by the requirement that an item must be hit by our scanner and must not be a PUP because we know our realtime protection is far more likely to stop an attack/threat long before our scanner will have a chance to see it.  The way we're reporting this data gives the AVs a huge advantage over us, so if they were proficient at providing 'complete' protection, the numbers on the map should be far lower or even non-existent for the 'top' AVs that tend to do well in the various comparative tests.  Yet I see those top AVs on the map plenty of times right along with all the others.
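The map's inclusion criteria described above (an AV registered with Security Center, at least one malicious detection from an on-demand scan, and the hits not being PUP-only) can be sketched as a simple predicate. The field names below are hypothetical; the real telemetry schema is not public:

```python
from dataclasses import dataclass

@dataclass
class ScanReport:
    """Hypothetical shape of one telemetry record behind the remediation map."""
    av_registered: bool      # one or more AVs registered with Security Center
    malicious_found: int     # malicious items detected by an on-demand scan
    pups_only: bool          # True if the detections were PUPs and nothing else

def counts_toward_map(report: ScanReport) -> bool:
    """Include a report only when an AV was present, the scan found at
    least one malicious item, and the detections were not PUP-only."""
    return (report.av_registered
            and report.malicious_found >= 1
            and not report.pups_only)
```

Note how the predicate excludes exactly the cases exile360 mentions: realtime blocks never produce a qualifying scan detection, and PUP-only scans are filtered out, so the map undercounts Malwarebytes' own paid protection by design.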


Based on the data, I would think that MB should be considering a partnership with an established AV company.  That would allow for a multi-level protection product, that in the real world, could be an actual replacement for an AV suite.   This would be in recognition of the understanding that traditional AV products can let a lot of today's exploits slip past the defensive perimeter.  But IMHO, you should still use an active signature based AV scanner running in real-time.  Obviously, that is not enough to catch all of today's malware threats, but combining the two approaches in a layered defense makes the most sense.


That's the thing, we believe based on our testing and data that between our heuristics and signatures, our behavior based layers of protection and our blocking of known malicious servers that we are providing a layered defense capable of stopping every threat that any traditional AV database could and more.

Partnering with/integrating a third party AV is something we've considered many times, but each time the idea comes up our Research and Dev teams prove to us through real world data as well as basic logic why this would be a mistake, especially now that the vast majority of threats are not only polymorphic, but even to the extent that two malicious binaries downloaded from the same source on the same day are frequently different.  This renders traditional signatures pretty much useless because real world threats change far too frequently for any reactive approach to be an effective defensive layer.  You can overcome some of this polymorphism through the use of heuristics, but the bad guys have caught on there as well and are now publishing threats which will deliberately vary themselves to such a degree that most heuristics will miss subsequent threats from the same family/vendor within a very short span of time, sometimes even from one download to the next for the same URL.

What they cannot change is the behavior of malware and the techniques used by threats to infect a system and accomplish whatever their malicious purpose is, be it to download other threats, to open a backdoor into the system, to capture keystrokes/passwords and other sensitive data or to extort money from victims via file encryption and/or lockout from major functions of the system.

Honestly though, this data really only shows how proficient our database, heuristics and anomaly detection components are as compared to the AVs because again, it is not showing what our realtime protection components are blocking.
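The polymorphism argument can be illustrated with a toy example: two "builds" of the same payload hash differently, so an exact-hash signature treats them as unrelated files, while a rule keyed on the shared behavior still matches both. This is purely illustrative of the principle, not a model of how real malware or any vendor's engine works:

```python
import hashlib
import os

# Stand-in for a malware family's core routine; this string just models
# "same behavior, different surrounding bytes".
CORE = b"enumerate_files(); encrypt(); show_ransom_note();"

def make_variant() -> bytes:
    """Crude polymorphism: identical core logic wrapped in random padding,
    so every build has a different hash."""
    return os.urandom(8) + CORE + os.urandom(8)

def hash_signature(sample: bytes) -> str:
    """Exact-hash 'signature' of a sample."""
    return hashlib.sha256(sample).hexdigest()

def behavior_match(sample: bytes) -> bool:
    """Behavior-style rule keyed on what the code does, not its exact bytes."""
    return CORE in sample

a, b = make_variant(), make_variant()
```

Running this, `hash_signature(a)` and `hash_signature(b)` differ (so a blocklist containing one hash misses the other), while `behavior_match` is true for both variants.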

Do keep in mind also though that we are not trying to say that these AVs are useless.  It only shows that they aren't necessarily the only protection needed and that at least adding Malwarebytes to the equation will increase greatly the level of security provided to the user's system and data.

You see, since the beginning we've been fighting an uphill battle against the traditional way of thinking that an AV is all that a system requires to stay safe online.  We feel this data backs our claims that there is more to the story than just good test results and we are using real world data to prove it, not some controlled laboratory test using a finite set of aged malware samples or direct links to malicious binaries skipping several phases of the attack chain (one of the greatest flaws with most tests I feel).  I hope that eventually we publish data on our protection components as well in order to show how many more threats and potential attacks we stop, even with a top AV installed, but for now we at least have realtime data showing that even our free scanner still has significant value to offer AV users regardless of the AV solution they've chosen.


I agree with the arguments for a behavior based defensive layer, and the limitations of traditional signature based approaches. 

But I think the claim that this is all you need today has created a credibility gap for the product (based on comments I read in public forums).

While your scientific data may prove this point to be correct, there is apparently a majority public opinion that has not yet shifted over to this viewpoint.  This is not yet perceived to be a proven approach, and most security minded folks generally choose to err on the side of caution.

I use multiple layers myself, including a good AV.


6 hours ago, Tinstaafl said:

I agree with the arguments for a behavior based defensive layer, and the limitations of traditional signature based approaches. 

But I think the claim that this is all you need today has created a credibility gap for the product (based on comments I read in public forums).

While your scientific data may prove this point to be correct, there is apparently a majority public opinion that has not yet shifted over to this viewpoint.  This is not yet perceived to be a proven approach, and most security minded folks generally choose to err on the side of caution.

I use multiple layers myself, including a good AV.

You are absolutely correct, and in fact, your use of a layered approach to defense (including an AV) is one we actually do endorse.  While we do claim that our software is now capable of replacing a user's antivirus, we do not tell our customers that they should uninstall theirs or cease using one.  There is still value in a second opinion as well as a separate layer of defense in case one fails.  For example, while we always hope that it won't happen, it is always a possibility that something might slip past us or that our software could somehow become disabled.  In either scenario, if you have an active AV it is possible that it might catch the threat and/or continue to function so that you aren't left defenseless.

This is also why, by default, we do not uninstall any installed AV or prompt our customers to do so via any sort of notifications or claims of 'incompatibility', and why we still design our products to function in realtime alongside other layers of protection, including antivirus.  We do feel it is best to play it safe and that including an AV in the equation can only increase your odds of staying safe.

I said once before in another thread that while we do feel we finally have a product capable of replacing a user's AV, we do not believe users who wish to keep running one should stop, and that is still what I believe.  In fact, even on my own system, after installing Malwarebytes 3 I left Windows Defender enabled because it's light on resources and doesn't conflict with Malwarebytes.  While it hasn't caught or blocked anything missed by Malwarebytes 3 so far, there is no harm in leaving it there as a secondary layer of protection.


  • 3 months later...

I did find some reports that I have included below where Malwarebytes was tested by independent IT security research company MRG-Effitas and I have to be quite honest, Malwarebytes didn't perform very well at all.  Actually, Malwarebytes was near the bottom of the pack in every test when you compare it to HitmanPro and Zemana Anti-Malware and most of the others.

Could this be the reason Malwarebytes doesn't participate with the testing labs I have listed below? 

These past results are terrible in my view, and MRG-Effitas looks like a pretty credible source if you ask me https://www.mrg-effitas.com/about-us/about-mrg-effitas . The only test I actually see Malwarebytes participate in is a Checkmark Certified test performed by http://www.westcoastlabs.com where Malwarebytes doesn't get compared side by side to any other product, as you can clearly see here http://www.checkmarkcertified.com/wp-content/uploads/2017/08/checkmark_report_remed_malbytes_1.0.pdf

This is MRG Effitas' latest quarterly 360 Assessment report:  

https://www.mrg-effitas.com/wp-content/uploads/2017/12/MRG_Effitas_360_Assessment_2017_Q3-1.pdf 

Rootkit Remediation test:

https://www.mrg-effitas.com/wp-content/uploads/2015/07/In-the-wild-Rootkit-Remediation-Comparative-Analysis-2015-Q3.pdf

Ransomware Detection test:

https://www.mrg-effitas.com/wp-content/uploads/2016/07/Zemana_ransomware_detection.pdf

List of Testing Labs:

http://www.av-comparatives.org   

http://www.av-test.org

https://www.mrg-effitas.com

https://selabs.uk

http://www.virusbulletin.com  


1 hour ago, Weston1973 said:

I did find some reports that I have included below where Malwarebytes was tested by independent IT security research company MRG-Effitas and I have to be quite honest, Malwarebytes didn't perform very well at all.  Actually, Malwarebytes was near the bottom of the pack in every test when you compare it to HitmanPro and Zemana Anti-Malware and most of the others.

Well that MRG report confirmed my thoughts on Avira, Bitdefender, and Kaspersky being the best 1st line defense against malware.  I use Avira.

I also feel better knowing that I use HitmanPro as a 2nd line layered defense scanner, in addition to Malwarebytes and Zemana.   :)


Could this be the reason Malwarebytes doesn't participate with the testing labs I have listed below?

Nope, the primary reason is that they seldom (if ever) actually replicate true real world attacks (even though they claim otherwise), but that is neither here nor there.  Besides, it is my understanding that many of these testing organizations expect vendors to pay them for their tests, which means it is entirely possible that the samples and methods they choose may be designed to deliberately favor one product over others.

Also, if you notice, the second and third tests you linked to pre-date Malwarebytes 3, which was when Malwarebytes initially began marketing their product as an AV replacement (due to the integration and advancement of several new protection layers).  The first test, and the only one which occurred during a relevant time frame, only tested the free/on-demand scanning version of Malwarebytes which only utilizes the much less effective signature based components of Malwarebytes (standard malware detection signatures and heuristics; none of the more advanced signature-less components which stop attacks much earlier in the attack chain, prior to any malicious binary actually reaching the target endpoint).

If you want some true real-world data, take a look at this.  That isn't even static content like the PDFs you linked to; it's real-world data from right this second.  Even better, it ONLY shows where Malwarebytes' scan engine (meaning the on-demand scanner that did so poorly in those comparative tests you linked to) has detected malware (PUPs are excluded) on a system where each listed AV is present (it also does not include any data from any of Malwarebytes' realtime protection components such as Web Protection, Exploit Protection or Ransomware Protection).  In my opinion that shines quite a different light on these tests, especially since I pretty much know that any time I visit that page I'll no doubt find a long list of highly rated AV products missing actual threats that an on-demand scan by Malwarebytes 3 has detected.

Regardless, the last I heard from the team, they actually do intend to participate in some comparison tests, however I suspect they haven't yet due to other work that they're doing on the product right now (you might notice they've had quite a few releases lately, both for bug fixes as well as new features) but I do look forward to the time when they finally do get some comparative testing done because I'm anxious to see the results myself.


I will agree that in theory the Malwarebytes team has some advanced technology that may not be readily tested with current methodologies.

But unfortunately, the naysayers may be getting the upper hand lately due to the poor "optics" regarding the reluctance of the team to participate and shed these doubts.  I would think that making this happen should become a priority effort from the executive offices down to the front line.  :excl:

Link to post
Share on other sites

Am I to believe independent lab results from an INDEPENDENT 3rd party or the company who develops the product?  Who will be unbiased and impartial here?  C'mon now, I will admit that some of these lab results I posted were from when Malwarebytes was on version 2.0 and not the "latest and greatest" version 3.0 you are on now, but you are consistently in last place (I have reviewed every 360 Assessment quarterly report from MRG Effitas since 2016) when compared to HitmanPro (which they still call SurfRight in their reports) and Zemana Anti-Malware in MRG Effitas' initial detection rates for on-demand security products.  Yet according to Malwarebytes, the reason is that these labs "seldom (if ever) actually replicate true real world attacks (even though they claim otherwise)".

I am getting the same rhetoric and similar excuses from Webroot, another company that doesn't really participate in lab tests, other than ones by this Passmark organization that I've never heard of, testing them here https://www.passmark.com/reports/Webroot_SecureAnywhere_Business_vs_competitors_2017_Edition_1.pdf

I want to see Malwarebytes (and for that matter Webroot too) in a test that you approve of, compared next to HitmanPro, Panda, Bitdefender, Kaspersky, Norton (Symantec), Zemana, Avira, and other top-tier security products, and let's see how well you'll really do.

SHOW ME PROOF that you actually make a competitive product when compared to your competition.

I would love to see Malwarebytes in this AV-Comparatives Real World Protection test below:

http://chart.av-comparatives.org/chart1.php?chart=chart2&year=2017&month=Jul_Nov&sort=1&zoom=2


18 minutes ago, Tinstaafl said:

I will agree that in theory the Malwarebytes team has some advanced technology that may not be readily tested with current methodologies.

But unfortunately, the naysayers may be getting the upper hand lately due to the poor "optics" regarding the reluctance of the team to participate and shed these doubts.  I would think that making this happen should become a priority effort from the executive offices down to the front line.  :excl:

I suspect that if it had a greater impact on sales than it currently seems to, they would.  Not that money is their primary motivator, but they do want to provide their products in a way that both appeals to and meets the needs and expectations of their customers and potential buyers.  So if the majority of users were relying on these kinds of comparative tests to make their final buying decisions, as I believe they did at one time (though that was some years ago), then I'm confident it would be a higher priority for them than it currently seems to be.

That said, yeah, you're right, it isn't easy to test Malwarebytes (or many of the current AV/AM products) accurately.  Malwarebytes isn't the only product not being adequately measured for performance as a great many vendors these days use some quite similar protection technologies to some of what Malwarebytes currently offers and I don't believe any of them are being adequately tested.  I believe the entire testing industry needs to evolve far more than they have in the past several years now that it has become far more common for security products to utilize a layered approach to protection and have begun to rely far less on raw file detection and malicious file binary hash identification techniques (techniques which are virtually useless against any modern threat as the vast majority are now polymorphic, often changing from one download to the next).

edit: By the way, I went ahead and visited the Malwarebytes heatmap I linked to in my previous post when writing up my last reply and from then to now, here's what it shows so far (remember, this data is live so it only started counting from the moment I clicked the link to visit the page myself and doesn't include any older data from any previous days/times):

Quote

Microsoft Consumer    1,370
Avast    333
AVG    136
ESET    108
McAfee Consumer    93
Norton    74
Kaspersky    72
Avira    48
Sophos    43
McAfee Corporate    29
Bitdefender    27
Symantec    25
Panda    24
Qihoo    23
Trend Micro Corporate    16
Microsoft Corporate    14
Webroot    12

Also note that I only included AVs which had missed 10 or more threats in the time I had the page open.


This topic is now closed to further replies.