How We Use Agents to Detect Rented Gig-Worker Accounts

Roham Mehrabi

Use Case

Incident

  • The February 2025 Wilbraham, MA Uber Eats assault was committed by a man using a rented account registered to a woman.

  • 1 in 4 gig workers has rented or sold their verified account; this figure rises to nearly 1 in 3 for Millennial and Gen Z drivers.

  • Standard KYC and background checks at signup miss this completely. The problem is a failure of continuous identity verification, not onboarding.

  • Our agents trace both the real account holder and the likely renter by cross-referencing a single identifier against breach data, social graphs, and public records.

The man who assaulted a woman after delivering her Uber Eats order rented his account for $65 from a Facebook group. The app told the victim a woman was delivering her food. A man showed up instead. This is not an isolated incident; it is a systemic failure of identity verification.

The Gig Economy's Continuous Identity Problem

The case in Wilbraham, Massachusetts, is a brutal headline, but the underlying mechanism is happening at scale on every major platform. Our internal analysis and public surveys show that one in four gig workers has rented or sold their verified account. For younger drivers (millennials and Gen Z), that number is closer to one in three.

This is not a problem that can be solved with better onboarding. Every major gig platform, from Uber to DoorDash to Instacart, runs background checks. Teams at identity verification companies like Checkr have made this process incredibly efficient. The accounts are legitimate. The people who passed the background check are real.

The issue is what happens *after* verification. The problem is continuous identity. A verified account does not mean a verified person is completing every trip. These accounts are openly traded in private Facebook groups and Telegram channels for anywhere from $65 to over $400 a month. The platform recognizes a valid account ID, but the trust-and-safety team is unaware of the transaction that put an unvetted person behind the wheel.

Our founding engineer Hashim Khawaja, who built our cross-platform resolution models, noted: "The platforms see a verified account ID. They don't see the Telegram group where that ID was sold, or the breach record that shows the real owner's other known associates. The data exists, just not inside their walls."

This creates a massive gap. The person who passed the background check is not the person interacting with customers. For T&S and identity teams, the core challenge is bridging the gap between the static, verified identity at signup and the dynamic, operational identity on every single transaction.

How We Trace the Real Person Behind a Rented Account

When an account is flagged for suspicious behavior, the platform's internal data is often insufficient to determine if it is rented. You see a login from a new device, but that could be a new phone. You see activity in a different neighborhood, but the person could have moved. To get ground truth, you have to look outside the platform's walls. We do this by starting with a single identifier and mapping the entire network around it.

Here is a typical, anonymized case.

The Input: A Flagged Account

An investigation starts with a driver account for “Sarah L.” The account passed its background check nine months ago. It is now flagged for a sudden spike in negative customer reviews and an unusual work pattern, with trips running 18 hours a day.

The only hard data point the platform has is the phone number associated with Sarah’s account. This is the input our agent receives.

The First Trace: Anchoring the Real Sarah L.

Our agent takes the phone number and cross-references it against our proprietary identity graph, historical breach data, and public social profiles. Within seconds, a clear picture of the real Sarah L. emerges. The phone number is linked to:

  • A LinkedIn profile for a woman with that name and location.

  • A GitHub account with commits from three years ago.

  • User profiles on several forums, all using a consistent username pattern.

  • Mentions in the 2021 `Canva` and 2019 `MyFitnessPal` data breaches, which link her phone number to two specific email addresses.

This cluster of correlated data points gives us a high-confidence anchor for the real, verified account holder. We know who she is, digitally.
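To make the anchoring step concrete, here is a minimal sketch of how correlated signals can be combined into a single confidence figure. The data structures, source names, weights, and the independence-style combination formula are all illustrative assumptions, not Sixtyfour's actual schema or scoring model.

```python
from dataclasses import dataclass, field

# Hypothetical structures for illustration only; field names, sources,
# and weights are assumptions, not the production schema.
@dataclass
class Signal:
    source: str        # e.g. "linkedin", "breach:canva-2021"
    identifier: str    # the linked value found (email, username, etc.)
    weight: float      # how strongly this source ties back to the seed

@dataclass
class Anchor:
    seed: str                              # the input identifier
    signals: list = field(default_factory=list)

    def confidence(self) -> float:
        # Treat signals as independent corroboration:
        # confidence = 1 - product of (1 - weight).
        p = 1.0
        for s in self.signals:
            p *= (1.0 - s.weight)
        return 1.0 - p

anchor = Anchor(seed="+1-413-555-0142")  # fictional phone number
anchor.signals += [
    Signal("linkedin", "sarah-l-profile", 0.6),
    Signal("github", "sarahl-dev", 0.5),
    Signal("breach:canva-2021", "sarah@example.com", 0.7),
    Signal("breach:myfitnesspal-2019", "s.l@example.com", 0.7),
]
print(round(anchor.confidence(), 3))  # → 0.982
```

Four moderately weighted signals compound into a high-confidence anchor, which mirrors the intuition above: no single data point identifies Sarah, but the cluster does.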

The Second Trace: Surfacing the Renter

Next, we analyze the operational signals from the platform, which point to a second, unseen person. The device ID logging trips is for an Android model Sarah has never used. The trip patterns originate from a location 60 miles away from her known address. The active hours do not align with her public social media posts.
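The device, location, and hours mismatches described above can be expressed as a simple rule check against the anchor profile. This is a toy sketch: the field names, coordinates, and thresholds (a 50-mile radius, a 16-hour ceiling) are assumptions for illustration, not the actual detection logic.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(a, b):
    """Great-circle distance between two (lat, lon) points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3956 * 2 * asin(sqrt(h))

def mismatch_flags(owner, activity, max_miles=50, max_hours=16):
    """Compare the verified owner's profile against operational signals."""
    flags = []
    if activity["device_model"] not in owner["known_devices"]:
        flags.append("unknown_device")
    if haversine_miles(owner["home_latlon"], activity["trip_latlon"]) > max_miles:
        flags.append("distant_location")
    if activity["active_hours_per_day"] > max_hours:
        flags.append("implausible_hours")
    return flags

# Fictional example data roughly matching the case described above.
owner = {"known_devices": {"iPhone 13"}, "home_latlon": (42.12, -72.43)}
activity = {"device_model": "Galaxy A14",
            "trip_latlon": (42.36, -71.06),       # ~70 miles away
            "active_hours_per_day": 18}
print(mismatch_flags(owner, activity))
```

Each flag on its own is explainable (a new phone, a move); the point of the case study is that all three firing at once, against a strong anchor profile, is what justifies the second trace.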

Our agent takes the new device signals and any secondary emails or usernames associated with the recent activity. It searches for these identifiers across dark web marketplaces and underground forums. The agent finds a user on a Russian-language forum advertising gig-work accounts who uses a username containing a fragment of one of Sarah L.’s old, breached passwords. This user’s profile also mentions activity in the same city 60 miles away where the trips are originating.
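The username-to-breached-credential link can be approximated with a simple substring check. This sketch is a deliberate simplification (the secret and minimum fragment length are made up); real matching would be fuzzier and weighted by fragment rarity.

```python
def shares_fragment(username: str, breached_secret: str, min_len: int = 4) -> bool:
    """True if any fragment of the breached credential (at least min_len
    characters) appears inside the username. Illustrative only."""
    u, s = username.lower(), breached_secret.lower()
    return any(s[i:i + min_len] in u for i in range(len(s) - min_len + 1))

# Fictional forum handle vs. a fictional old breached password.
print(shares_fragment("dr1ver_hub_tiger77", "tiger77!"))  # → True
print(shares_fragment("dr1ver_hub_tiger77", "sunshine9"))  # → False
```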

The Result: A Two-Person Graph

What the agent returns is not a simple “yes” or “no.” It delivers a graph that maps two distinct entities. The first is Sarah L., the verified owner. The second is a high-confidence profile of the likely renter, linked only by their shared use of the gig-platform account. The trust-and-safety team can now see the full picture and take precise action, disabling the account and reporting the unvetted driver.

How Identity Resolution Differs From Standard KYC

Traditional KYC and identity verification are built to answer one question: is this person who they say they are *at this moment*? Our agents are built for a different problem: who is this person, *everywhere*?

Here is the five-step mechanism.

1. The Input: The agent takes a single flagged identifier. This can be an email address, phone number, username, marketplace seller ID, crypto wallet address, or name.

2. The Search: The agent cross-references that identifier across a wide range of surfaces. This includes Sixtyfour's proprietary databases of historical identity data, dark web sources and underground marketplace mentions, breach records (compromised emails, phone numbers, usernames, password reuse patterns), social profiles across platforms (LinkedIn, GitHub, Reddit, Discord and Telegram memberships, Steam, etc.), and clear-web public records (LLC formation registries, registered-agent records, court filings, marketplace seller pages, news mentions).

3. The Resolution: This is where reasoning comes in. An LLM-driven inference layer weighs the quality and context of signals from different surfaces. It understands that a username in a 2018 data breach linked to a phone number is a strong connection. It can differentiate between two people with the same name by analyzing their separate social graphs. It then maps these resolved entities into a single, unified graph.

4. The Output: The agent returns a graph of connected accounts and identities. It shows the platforms each identity appears on, the specific identifiers that link them, and a confidence score for each connection. The fraud analyst gets a complete picture of the network, not just a single data point.

5. What We Don't Do: This process is critical to understand. The agents do not do IP-based identification. They do not track device IPs, use device fingerprinting, or rely on behavioral session signals from inside platforms. They do not access private platform data, do not bypass authentication, and do not make legal determinations about identity. The mechanism is OSINT-style cross-referencing across public, breached, and proprietary sources, not telemetry.
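The graph output described in steps 3 and 4 can be sketched with a plain union-find structure: identifiers are merged into clusters, and each edge carries the confidence of the link that joined them. This only illustrates the output shape; the real resolution layer is LLM-driven and far richer, and all identifiers below are fictional.

```python
from collections import defaultdict

class IdentityGraph:
    """Toy identity-resolution graph: union-find over identifiers,
    with per-edge confidence scores retained for the analyst."""

    def __init__(self):
        self.parent = {}
        self.edges = []          # (id_a, id_b, confidence)

    def _find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def link(self, a, b, confidence):
        self.edges.append((a, b, confidence))
        self.parent[self._find(a)] = self._find(b)

    def clusters(self):
        groups = defaultdict(set)
        for node in self.parent:
            groups[self._find(node)].add(node)
        return list(groups.values())

g = IdentityGraph()
g.link("phone:+1-413-555-0142", "email:sarah@example.com", 0.92)  # breach record
g.link("email:sarah@example.com", "linkedin:sarah-l", 0.85)       # profile match
g.link("device:galaxy-a14-4f2c", "forum:accseller99", 0.71)       # renter cluster
print(sorted(len(c) for c in g.clusters()))  # → [2, 3]: two distinct entities
```

The two components are exactly the "two-person graph" from the case study: one cluster anchors the verified owner, the other the likely renter, and the edge confidences tell the analyst how firm each link is.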

Validating the Link Between Renter and Account Owner

Surfacing a potential renter is one thing; validating the connection with high confidence is another. We build this confidence by looking for corroborating signals that are invisible to any single platform.

For example, we often find the real account owner posting on Reddit or in a Facebook group asking, “Has anyone else been suspended from DoorDash for no reason?” This happens just as the renter, using their account, triggers a fraud alert. The two activities, seen together, confirm the account rental hypothesis.

In a recent analysis of 1,000 flagged gig-worker accounts, our agents found that in over 85% of cases where renting was confirmed, the renter's primary phone or email had appeared in at least one prior data breach. This created a traceable digital footprint completely separate from the verified account owner. This historical data is the key to unmasking operators who have no official connection to the account they are using.
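The corroboration pattern above (the owner's public complaint landing near the renter-side fraud alert) amounts to a time-window check. A minimal sketch, assuming a 48-hour window and fictional timestamps:

```python
from datetime import datetime, timedelta

def events_corroborate(owner_post: datetime, fraud_alert: datetime,
                       window: timedelta = timedelta(hours=48)) -> bool:
    """True if the owner's public complaint falls within a window around
    the fraud alert. Window size is an illustrative assumption."""
    return abs(owner_post - fraud_alert) <= window

# Fictional example: the owner posts "why was I suspended?" ~11 hours
# after the renter's activity trips an alert.
print(events_corroborate(datetime(2025, 2, 10, 9, 30),
                         datetime(2025, 2, 9, 22, 15)))  # → True
```

The two events seen in isolation mean little; their temporal proximity, combined with the two-cluster graph, is what elevates "rented account" from hypothesis to high-confidence finding.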

What This Means for Trust and Safety Teams

For any leader in trust, safety, or identity, this pattern requires a shift in thinking. The old model of a single, heavy verification event at onboarding is no longer sufficient.

First, teams must move from a mindset of “trust at onboarding” to one of “continuous trust.” Identity is not static. It must be re-evaluated, even passively, throughout the user lifecycle. This is the same challenge AML teams at firms like Hummingbird or Airwallex face with mule accounts.

Second, the strongest signals of account renting are mismatches. Look for discrepancies between the verified identity (the person in the KYC documents) and the operational identity (the device, location, and behavior patterns of the active user). Where these two diverge, an investigation is warranted.

Finally, remember that a single identifier is enough to start. You do not need a full profile to begin mapping a network. A customer complaint containing a username, a suspicious email address, or a phone number from a support call can be the thread that unravels an entire fraudulent operation.

The platforms verify the account. You have to verify the human on every single trip.

FAQ

How do you detect a rented gig worker account?

We detect rented accounts by using a single identifier (such as an account's phone number) to cross-reference it and build a profile of the real owner. We then compare this with operational signals (such as device ID and location) to surface the likely renter's separate identity.

What is the difference between continuous identity verification and KYC?

KYC (Know Your Customer) is a one-time verification process during onboarding to confirm that a user is who they say they are. Continuous identity verification is the ongoing process of ensuring that the person using the account remains the verified owner in every session or transaction.

Can platforms like Uber or DoorDash see this activity themselves?

Platforms can detect suspicious internal signals, such as a new device or location. However, they typically cannot see the external data, such as breach records or dark web chatter, needed to confirm that two different people are involved.

How does Sixtyfour trace the real person behind an account?

Our agents use an OSINT-based approach. They take a known identifier and search across public records, social media, breach databases, and other sources to find linked profiles and data points, resolving them into a single, high-confidence identity graph.



See What Our Agents Find

Investigate any person or company right now on the platform.
