Ring's wifi-enabled video doorbell seems innocuous enough. Originally launched as Doorbot in 2013, rebranded as Ring in 2014, and acquired by Amazon in 2018, the brand has grown explosively, now selling roughly 400,000 devices monthly. What many don't realize is that Ring has quietly become the most extensive private video surveillance network in the world.

The device itself reads like a standard Internet of Things product spec sheet: motion-activated video recording, cloud storage, mobile notifications, live streaming. But Ring is more than just a smart doorbell - it's a complete surveillance ecosystem. Video clips become shareable content on social networks, particularly Ring's own hyper-local platform Neighbors. Amazon curates the most interesting clips for its RingTV channel. The system acts simultaneously as a private security device, a state surveillance tool, a content creation platform, and a social network. And at its core, Ring exists to collect data.

Your Digital Footprint on Your Own Doorstep

Every Ring device creates an extensive digital footprint through both active and passive data collection. Let's imagine you and your neighbors have Ring doorbells and use the Neighbors app.

Simply registering your device gives Ring your name, contact details, address, payment info, and wifi network data. If you connect social accounts, they get that data too. Through a data subject access request, the BBC found that Ring "keeps records of every motion detected by its doorbells, as well as the exact time they are logged down to the millisecond." In one case study, Ring recorded nearly 5,000 discrete actions over just 130 days - every app open, zoom, screen tap, and live view.
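
To make the scale of that logging concrete, here's a sketch of what per-interaction telemetry could look like. The event names and fields are illustrative assumptions, not Ring's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical telemetry event: field names are illustrative,
# not Ring's actual schema.
@dataclass
class DeviceEvent:
    device_id: str
    event_type: str    # e.g. "motion_detected", "app_open", "live_view"
    timestamp_ms: int  # logged to the millisecond, per the BBC's findings

def record_event(log: list, device_id: str, event_type: str) -> None:
    now = datetime.now(timezone.utc)
    log.append(DeviceEvent(device_id, event_type, int(now.timestamp() * 1000)))

events: list[DeviceEvent] = []
record_event(events, "doorbell-01", "motion_detected")
record_event(events, "doorbell-01", "live_view")
record_event(events, "doorbell-01", "app_open")

# ~5,000 events over 130 days works out to roughly 38 logged actions per day
print(5000 / 130)  # ≈ 38.5
```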

The doorbell captures video and audio within a 155° field of view up to 25 feet away, with audio detection reaching 20 feet. In most residential settings, this means recording cars and pedestrians simply passing by. The Neighbors app adds another layer, collecting your posts, interactions, chats, location data, and content consumption patterns.
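
A back-of-envelope calculation shows how much ground those specs sweep. Treating the camera's coverage as a circular sector (and assuming, purely for illustration, an omnidirectional microphone):

```python
import math

FOV_DEGREES = 155    # stated horizontal field of view
VIDEO_RANGE_FT = 25  # stated motion/video detection range
AUDIO_RANGE_FT = 20  # stated audio pickup range

def sector_area(angle_deg: float, radius_ft: float) -> float:
    """Area of a circular sector: (angle / 360) * pi * r^2."""
    return (angle_deg / 360) * math.pi * radius_ft ** 2

video_area = sector_area(FOV_DEGREES, VIDEO_RANGE_FT)
audio_area = sector_area(360, AUDIO_RANGE_FT)  # omnidirectional mic (assumption)

print(f"Video coverage: ~{video_area:.0f} sq ft")  # ≈ 845 sq ft
print(f"Audio coverage: ~{audio_area:.0f} sq ft")  # ≈ 1257 sq ft
```

On a typical lot, a wedge that size easily reaches past the property line into the sidewalk and street.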

Your neighbors who don't own Ring devices? They're being recorded too. Every time they walk or drive past a Ring doorbell, they become part of the dataset. Non-participation doesn't exempt you from surveillance - in fact, it becomes a data point itself.

While Ring users technically own their recorded content, Amazon's Terms of Service grant the company, its marketing partners, and law enforcement unfettered access to both content and user data. This gets combined with machine learning algorithms for everything from tracking "suspicious persons" to bank loan analysis to welfare fraud prediction.

We must understand this data collection as a form of capital accumulation. The real value for Amazon isn't in selling doorbells - it's in harvesting the data those doorbells generate. This is extractive data capitalism at its finest, taking data without meaningful consent or compensation.

Ring is just one small part of Amazon's vast data empire, alongside shopping habits from their store, reading preferences via Kindle, viewing data from Prime, and more. While the full scope of how Amazon combines these datasets remains opaque, we know they use them for products like Amazon Forecast, which applies machine learning to deliver "highly accurate time-series forecasts."
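
Amazon Forecast's internals are proprietary, but the underlying idea is ordinary time-series prediction. The toy exponential-smoothing sketch below, with invented numbers, is a stand-in for far more sophisticated models:

```python
# Toy single-exponential smoothing: a stand-in for the far more
# sophisticated models behind a service like Amazon Forecast.
def forecast_next(series: list[float], alpha: float = 0.5) -> float:
    """Smooth the series and return the one-step-ahead forecast."""
    level = series[0]
    for observation in series[1:]:
        level = alpha * observation + (1 - alpha) * level
    return level

# e.g. daily motion-event counts from a hypothetical doorbell
daily_events = [31.0, 42.0, 38.0, 45.0, 40.0, 44.0, 39.0]
print(f"Forecast for tomorrow: {forecast_next(daily_events):.1f} events")
```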

The Racial Politics of "Suspicious Activity"

Ring is unambiguous about its intended purpose - protecting homes and communities from crime. As founder Jamie Siminoff stated: "Neighbors is a major step forward in advancing the Ring mission, which has been clear and enduring since I created Ring four years ago: to reduce crime in neighbourhoods. We, as a company, are unapologetically passionate about this."

But how exactly does a wifi doorbell prevent crime? Unlike physical barriers, surveillance systems don't actually make break-ins harder. Traditional CCTV aims to deter by increasing perceived risk through visible presence. Ring doorbells are subtle by design, lacking the obvious signage of traditional security systems.

The Intercept revealed Amazon's plans to add biometric recognition to Ring, automatically alerting owners when "suspicious" individuals appear on camera. This makes Ring's true purpose clear - it's a tool for observing, cataloging, reporting and sharing activities deemed "suspicious" by users or algorithms. The critical question becomes: what constitutes suspicious, and how is that encoded in data?
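
The Intercept's reporting doesn't include implementation details, but embedding-based watchlist matching is the standard shape of such systems. In this hypothetical sketch every name, vector, and threshold is invented; the point is that whoever populates the watchlist defines "suspicious":

```python
import math

# Hypothetical face-embedding watchlist check. Names, vectors, and the
# threshold are invented for illustration; nothing here reflects Ring's code.
WATCHLIST = {
    "flagged-person-1": [0.12, 0.88, 0.35],
    "flagged-person-2": [0.91, 0.04, 0.44],
}
MATCH_THRESHOLD = 0.95  # arbitrary cosine-similarity cutoff

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def check_frame(embedding: list[float]) -> str | None:
    """Return a watchlist ID if the frame 'matches', else None."""
    for person_id, reference in WATCHLIST.items():
        if cosine_similarity(embedding, reference) >= MATCH_THRESHOLD:
            return person_id  # this is the moment suspicion is automated
    return None

alert = check_frame([0.13, 0.86, 0.36])
if alert:
    print(f"ALERT: camera matched {alert}")
```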

Data doesn't exist in a vacuum - it requires classification to become meaningful. This datafication process necessarily reproduces existing social and cultural biases. The Excavating AI project's analysis of ImageNet, a major machine learning training dataset, found categories like "Bad Person," "Crazy," "Failure," "Loser," along with numerous racist and misogynistic terms. Data is never neutral.
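
The mechanics are mundane: a model can only ever emit the categories its training taxonomy defines. A toy sketch, with invented labels standing in for the ImageNet person categories Excavating AI documented:

```python
import random

# A classifier's output space is fixed by its training taxonomy.
# These labels are invented stand-ins for the kinds of person
# categories the Excavating AI project documented in ImageNet.
TAXONOMY = ["neighbor", "delivery worker", "suspicious person"]

def classify(image_features: list[float]) -> str:
    """Placeholder model: the point is the output space, not the scoring."""
    scores = [random.random() for _ in TAXONOMY]
    return TAXONOMY[scores.index(max(scores))]

# Whatever the input, the answer is drawn from a taxonomy someone chose.
print(classify([0.2, 0.7, 0.1]))
```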

Since the civil rights era, overt racial inequality has given way to "colorblind" practices that provide "raceless" explanations for racial issues. In white-dominated spaces, whiteness becomes the invisible background against which people of color are compared. Residents often view their neighborhoods as "devoid of race" until people of color appear.

Sara Ahmed's work on "the stranger" is particularly relevant here. The suspicious person isn't just someone unrecognized, but someone already marked as not belonging, "embodying that which must be expelled from the neighborhood." These "strangers" are produced as dangerous figures threatening community purity, primarily through discourses around criminality.

Research consistently shows that as the percentage of young Black men in a neighborhood increases, white residents' perceptions of crime rise - even when controlling for actual crime rates. In white-coded spaces, Black men report being constantly monitored and harassed during routine activities.

When Ring Meets Law Enforcement

Unlike other tech companies that downplay their relationships with law enforcement, Ring actively courts police partnerships and uses them as a marketing tool. Through the Ring Neighborhoods Portal, law enforcement gets real-time access to "crime-related" posts within their jurisdiction.

Police can easily locate Ring devices and request footage - Ring even coaches police on obtaining footage without warrants. When users are uncooperative, police can simply ask Ring to preserve the video.

This fits perfectly with "order maintenance" or "quality of life" policing strategies that focus on disorder rather than serious crime. Despite pop culture depictions, police spend only about 4% of their time on violent crime, with over 35% handling non-criminal calls. Of roughly 14 arrests per officer annually, about 12 are for minor infractions like public drinking or vandalism.

Combined with "zero tolerance" approaches, this effectively criminalizes poverty and "anti-social" behavior. Consider "Stop and Frisk" - in 2011 alone, New York police conducted nearly 700,000 warrantless searches, 91% of which targeted people of color; 88% of those stopped were completely innocent.

Ring and similar technologies let police expand this surveillance into virtual residential spaces through digital policing targeting marginalized groups online. The NYPD's "Operation Crew Cut" monitored minority youth social media, using photos and messages to pursue conspiracy charges against many innocent individuals.

This manual surveillance is now automated. Ring data feeds into systems like ATLAS, used by U.S. immigration services to screen benefits applications for "fraud." We know little about how ATLAS works, but documents show it analyzes "relationships among individuals" for potential criminal connections and can target immigrants by race and ethnicity in "exceptional instances" - a dangerous loophole for discrimination.
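
Public documents describe ATLAS only at a high level, but "analyzing relationships among individuals" is, at bottom, graph traversal. This hypothetical sketch shows how guilt-by-association scoring can sweep in someone with no flag of their own:

```python
from collections import defaultdict

# Hypothetical relationship graph: edges are any recorded association
# (shared address, co-appearance in footage, a Neighbors interaction).
# Entirely invented; public documents describe ATLAS only at a high level.
edges = [
    ("applicant-A", "person-B"),
    ("person-B", "person-C"),  # person-C has a prior flag
]
flagged = {"person-C"}

graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)

def flag_within_two_hops(person: str) -> bool:
    """Flag anyone directly tied to, or one step removed from, a flagged individual."""
    for neighbor in graph[person]:
        if neighbor in flagged or graph[neighbor] & flagged:
            return True
    return False

# applicant-A has no flag and no direct tie to one, but is flagged anyway
print(flag_within_two_hops("applicant-A"))  # True
```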

The Future of Surveillance

Personal data ecosystems fundamentally shape our lives in ways we rarely notice. Rather than providing objective truth, data reproduces and reinforces existing social hierarchies, particularly around race. As these systems become more automated and opaque, the need for critical oversight becomes more urgent.

Ring exemplifies surveillance capitalism's next frontier - turning our homes and neighborhoods into data extraction points while automating and amplifying existing biases. We must question not just the privacy implications, but the broader social impact of normalizing constant surveillance and algorithmic suspicion.

The solution isn't simple, but it starts with understanding these systems not as neutral technologies, but as powerful tools for social control. Only then can we begin to imagine and demand alternatives that enhance safety and community without sacrificing civil rights and human dignity.
