Phishing Isn’t Just Email Anymore: How “The Com” Made Social Engineering the New Front Door
- Ryan Moore
- Feb 10
- 4 min read
Phishing used to be easy to spot: a strange sender, a misspelled domain, a
sketchy link.
That world is gone.

Today’s most successful attacks are built around credibility—and the attackers are often native-English social engineers who know how real organizations operate. In many high-profile incidents, the “hack” isn’t a technical exploit at all. It’s a conversation, a convincing request, or a trusted workflow used against you.
This shift is one reason cybersecurity researchers and law enforcement have been paying attention to a loose, youth-heavy cybercrime ecosystem known as
The Com—short for “the community”—where social engineering is a core skill and reputation is a powerful motivator.
For organizations that handle sensitive information (law firms, healthcare, finance, insurance), understanding this evolution is now part of basic risk management.
What is “The Com” (and why it matters to your business)?
“The Com” isn’t one centralized gang with a logo and hierarchy. It’s better understood as an online ecosystem—a loose network of smaller crews and individuals that collaborate, recruit, and trade tactics across chat platforms and gaming-adjacent communities. Reporting and research consistently describe it as young (often teens and young adults), English-speaking, and heavily focused on social engineering.
Why that matters:
They can sound like your people. Native English, a confident tone, and fluency in business/IT language make impersonation attempts feel normal.
They target identity and workflow, not just systems. If an attacker can convince someone to grant access, reset credentials, or bypass a process, “advanced hacking” isn’t necessary.
They scale credibility. Modern phishing is often multi-channel (email + phone + text + portal prompts), built around urgency and real-world context.
This is where “advanced phishing” is heading: less obvious technical compromise, more human-trust compromise.
The real-world cost: what’s been stolen (and what it costs)
When organizations underestimate social engineering, the impact can be immediate and expensive.
A few examples that show the scale of modern, social-engineering-driven incidents:
MGM Resorts reported an approximately $100M impact from a cyber incident (with operational disruption and customer data theft discussed in public reporting and filings).
In a separate, widely reported case tied to social engineering, Caesars Entertainment reportedly paid a $15M ransom after an incident affecting loyalty program member data (per multiple investigative reports).
On the individual/crypto side, a $245M Bitcoin theft was attributed to social engineering, with court filings describing the theft and follow-on criminal activity.
Those are specific incidents—but the broader trend is even more important:
The FBI Internet Crime Complaint Center (IC3) reported phishing/spoofing as the top complaint type in 2024.
Business email compromise is still one of the most expensive outcomes—measured in billions of dollars annually in reported losses.
In other words: even if a single phishing attempt looks small, the downstream impact (fraud, breach response, downtime, compliance exposure, reputational damage) is anything but.
Why phishing is getting “better” (and harder to detect)
1) It’s more targeted
Attackers build believable context: a vendor name, a department reference, a “quick request,” a real executive’s tone.
2) It’s less link-focused
Many modern incidents don’t require malicious links. The goal is often to:
get a credential reset approved
gain access through a trusted workflow
move laterally once inside
This is one reason social-engineering-heavy groups remain effective even as email filters improve.
3) It’s multi-channel
Email is just one path. Phone-based and chat-based social engineering have become a major driver in high-impact cases, especially where help desks and identity verification are inconsistent.
4) It’s accelerated by automation and AI
Threat research has highlighted how subtle phishing has become—using realistic language, QR-based lures, and compromised brand assets that appear legitimate at a glance.
The biggest misconception: “Our tools will catch it”
Tools help. But tools don’t fix process gaps.
Modern social engineering is built to exploit:
unclear identity verification
over-privileged access
lack of monitoring
busy teams under pressure
“just get it done” culture
That’s why the most effective security upgrades often look boring:
stricter access policies
faster detection
consistent offboarding
identity controls for “high-trust” actions
A practical defense checklist for high-trust organizations
You don’t need paranoia. You need repeatable controls.
1) Make access review non-negotiable
Quarterly access reviews for sensitive systems
Remove unused and stale accounts
Separate admin access from daily use
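The first step in an access review, flagging dormant accounts, is simple enough to automate. Here is a minimal sketch in Python; the account dictionary shape (`user`, `last_login`) is an assumption standing in for whatever your directory export actually provides, and the 90-day threshold is illustrative:

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=90)  # review threshold; tune per policy

def flag_stale_accounts(accounts, now=None):
    """Return usernames with no login within STALE_AFTER.

    `accounts` is a list of dicts like
    {"user": "jdoe", "last_login": datetime or None} -- a stand-in
    for a real directory export, not a specific product's schema.
    """
    now = now or datetime.now(timezone.utc)
    stale = []
    for acct in accounts:
        last = acct.get("last_login")
        # Accounts that have never logged in count as stale too.
        if last is None or now - last > STALE_AFTER:
            stale.append(acct["user"])
    return stale
```

The point is less the code than the cadence: run something like this on a schedule, and treat every flagged account as a ticket, not a suggestion.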
2) Tighten help desk + password reset verification
Require identity verification steps for account recovery
Add friction to high-risk changes (MFA resets, new device enrollment)
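The "add friction" idea can be expressed as a simple risk gate: high-risk request types trigger extra, out-of-band verification steps. This sketch is purely illustrative; the request types and check names are assumptions, not any ticketing product's API:

```python
# Hypothetical risk gate for help desk requests. Request types and
# verification step names are illustrative placeholders.
HIGH_RISK = {"mfa_reset", "password_reset", "new_device_enrollment"}

def required_checks(request_type):
    """Return the verification steps an agent must complete
    before fulfilling a help desk request."""
    checks = ["verify_employee_id"]
    if request_type in HIGH_RISK:
        # Extra friction: call back on a number already on file
        # (never one the caller provides) and require sign-off.
        checks += ["callback_on_file_number", "manager_approval"]
    return checks
```

The callback-on-file-number rule matters most: attackers routinely supply their own "direct line" during the call.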
3) Improve detection speed
Alert on unusual login behavior (time, location, device anomalies)
Monitor privilege changes and new admin grants
Track suspicious mailbox rules / forwarding changes
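Mailbox-rule abuse is concrete enough to check mechanically. The sketch below assumes rules have been exported as plain dicts (field names like `forward_to` and `subject_filter` are placeholders, not a specific mail platform's schema) and flags two common post-compromise patterns: forwarding outside the org, and auto-deleting payment-related mail to hide fraud:

```python
# Keywords attackers commonly filter on to hide invoice fraud.
SUSPICIOUS_KEYWORDS = ("invoice", "payment", "wire")

def suspicious_rules(rules, internal_domain="example.com"):
    """Flag inbox rules that forward mail externally or
    auto-delete messages matching payment-related subjects."""
    flagged = []
    for rule in rules:
        fwd = rule.get("forward_to", "")
        external = bool(fwd) and not fwd.endswith("@" + internal_domain)
        hides = rule.get("delete", False) and any(
            k in rule.get("subject_filter", "").lower()
            for k in SUSPICIOUS_KEYWORDS
        )
        if external or hides:
            flagged.append(rule["name"])
    return flagged
```

Most mail platforms expose rule inventories through an admin API, so a scheduled job comparing today's rules against yesterday's catches new forwarding rules within hours rather than months.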
4) Move toward phishing-resistant authentication where possible
Strong MFA configuration is baseline
For high-risk roles, use more resistant methods (hardware-backed / passkey approaches)
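One way to make "more resistant methods for high-risk roles" enforceable rather than aspirational is an explicit policy map, so weaker fallbacks simply aren't available to sensitive roles. Role names and method labels here are assumptions for the sketch:

```python
# Illustrative role-to-method policy. The key design choice: high-risk
# roles have NO fallback to phishable methods like SMS or TOTP codes.
AUTH_POLICY = {
    "finance_admin": {"passkey", "hardware_key"},
    "it_admin": {"passkey", "hardware_key"},
    "staff": {"passkey", "hardware_key", "totp_app"},
}

def is_allowed(role, method):
    """Return True if the role may authenticate with this method.
    Unknown roles get no methods (deny by default)."""
    return method in AUTH_POLICY.get(role, set())
```

Fallback paths are where phishing-resistant MFA usually fails in practice: if an admin can still "temporarily" switch to a code-based method, attackers will socially engineer exactly that switch.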
5) Train for reality—not theory
Training works best when it reflects what staff actually see:
“vendor invoice change”
“new device / MFA reset request”
“urgent doc share”
“IT needs you to confirm quickly”
And training should be paired with a culture of reporting without blame.
The takeaway
The biggest phishing shift isn’t technical—it’s human.
Groups associated with “The Com” thrive because they understand:
how teams communicate
how urgency overrides caution
how small process gaps become big incidents
For organizations handling sensitive information, the goal isn’t to expect perfect people.
It’s to build systems where one mistake doesn’t become a breach.
If you want an outside set of eyes on access, detection, and identity workflows, Team Moore can help you assess the gaps and prioritize what matters.