Busting Unions with AI: How Amazon Uses AI to Crush Labor Movements
From surveillance scanners to predictive modeling, Amazon has built an AI-powered system to prevent organizing before it begins.
It takes time to create work that’s clear, independent, and genuinely useful. If you’ve found value in this newsletter, consider becoming a paid subscriber. It helps me dive deeper into research, reach more people, stay free from ads/hidden agendas, and supports my crippling chocolate milk addiction. We run on a “pay what you can” model—so if you believe in the mission, there’s likely a plan that fits (over here).
Every subscription helps me stay independent, avoid clickbait, and focus on depth over noise, and I deeply appreciate everyone who chooses to support our cult.
PS – Supporting this work doesn’t have to come out of your pocket. If you read this as part of your professional development, you can use this email template to request reimbursement for your subscription.
Every month, the Chocolate Milk Cult reaches over a million Builders, Investors, Policy Makers, Leaders, and more. If you’d like to meet other members of our community, please fill out this contact form here (I will never sell your data nor will I make intros w/o your explicit permission)- https://forms.gle/Pi1pGLuS1FmzXoLr6
Disclaimer- This one’s all me. Not my coworkers, clients, the chocolate milk cult, my fight club, and/or my halal guy. I do this completely alone, and if there’s fallout, it lands here, and nowhere else. My words, my responsibility.
Play the man, Master Ridley; we shall this day light such a candle, by God’s grace, in England, as I trust shall never be put out
- Fahrenheit 451
While Tech and AI are increasing the rate of innovation and progress, there is a concerning undercurrent of the tech industry shifting towards surveillance capitalism and the monopolization of power. We’ve seen this in instance after instance: tech companies engaging in automated censorship, Y-Combinator investing in AI for sweatshops, a growing number of startups dedicated to mass surveillance, and increasing apathy from supposedly digitally fluent groups towards data privacy and protection.
Such trends have dire consequences for people. The study “Weaponizing the Workplace: How Algorithmic Management Shaped Amazon’s Antiunion Campaign in Bessemer, Alabama” highlighted one such case, where tech giant Amazon used multiple AI techniques to fight labor unionization (if you’d rather not read the paper, you can find good coverage here). This is part of a larger set of techniques used by the tech giant (and others). This article will give an overview of those techniques and cover ways to fight back. B/c if we don’t fight for ourselves, then no one else will.
Let’s get into it.
Executive Highlights (TL;DR of the Article)
This article breaks down how Amazon has developed and deployed one of the most sophisticated systems of algorithmic labor control in the world — and what it will take to dismantle it.
Here’s what you need to know:
Amazon’s Workforce Is Governed by Code, Not Managers
Amazon doesn’t rely on human supervisors to manage its workers — it uses an integrated AI-driven architecture made up of scanners, cameras, biometric systems, and performance algorithms. This system monitors productivity in real-time, logs every second of inactivity, and can auto-trigger warnings and terminations without human intervention.
Surveillance Is Not Just About Watching — It’s About Control
“I left Amazon because my body couldn’t handle it anymore. If you don’t move the box, the next person will move the box. … They’re ready to jump onto the next person. When someone gets that thought into your head that you’re so easily replaceable, you’re more willing to ignore your injuries. … One of my biggest concerns was what was I going to injure today.”
— MELISSA OJEDA, FORMER WAREHOUSE WORKER (this is a great source to study the situation in a lot of detail)
Every tool in the stack — from Time Off Task (TOT) systems to AI cameras and facial recognition — is designed to shape worker behavior. The goal isn’t just to track productivity but to create an atmosphere of fear and isolation. The system actively prevents the formation of trust and solidarity by atomizing social bonds, manipulating shift schedules, and isolating suspected “organizers” algorithmically.

Amazon Q and Predictive Suppression Are the Next Stage
With systems like Amazon Q, the company moves from monitoring behavior to predicting and suppressing potential dissent. These tools can integrate across Slack, email, warehouse data, and sentiment trackers to forecast organizing behavior — down to the individual — and intervene preemptively. Personalized anti-union propaganda, risk profiling, and isolation tactics are already possible and being refined.
What Happens at Amazon Will Not Stay There
If Amazon’s system works — automated management, predictive anti-union tactics, and AI-driven behavioral shaping — it will be exported. Schools, hospitals, logistics companies, and offices will adopt the same principles. What Amazon builds now is the prototype for future workplace governance across sectors.

Resistance Must Go Beyond Complaints — It Needs Strategy
The article offers a concrete playbook that we can use to push back:
Data poisoning: Disrupt algorithmic training through coordinated inefficiencies and adversarial inputs.
Visual obfuscation: Use fashion and physical hacks to interfere with facial recognition and computer vision systems.
Legal warfare: File simultaneous GDPR, labor, and AI transparency complaints globally to overwhelm Amazon’s legal shields.
Financial targeting: Quantify Amazon’s algorithmic risk and pressure institutional investors on long-term liability.
Operational disruption: Organize “flash mobs” of inefficiency and micro-strikes during peak periods to hit fulfillment flow and expose fragility.
The Fight Is Not Against AI — It’s Against Who Controls It
This isn’t a rejection of technology. It’s a rejection of the one-sided power dynamic where systems manage workers they can’t see, influence, or resist. The enemy isn’t the algorithm — it’s the asymmetry between those who write the rules and those who are subject to them.
If Amazon breaks labor with AI, the model gets sold everywhere. But if workers break the model first, the blueprint collapses.
This article explains how.
I put a lot of work into writing this newsletter. To do so, I rely on you for support. If a few more people choose to become paid subscribers, the Chocolate Milk Cult can continue to provide high-quality and accessible education and opportunities to anyone who needs it. If you think this mission is worth contributing to, please consider a premium subscription. You can do so for less than the cost of a Netflix Subscription (pay what you want here).
I provide various consulting and advisory services. If you’d like to explore how we can work together, reach out to me through any of my socials over here or reply to this email.
Section 2: The Control Stack: Amazon’s Cutting-Edge Algorithms for Worker Tracking
Amazon doesn’t need to send union-busting thugs to the warehouse floor. It sends code.
At the center of Amazon’s labor strategy lies a multi-layered architecture of algorithmic control. The company has built a surveillance and enforcement stack designed not only to optimize logistics but to preempt solidarity, fracture trust, and atomize resistance.
“Amazon doesn’t care about their employees at all; it’s all talk. I called for a stop-work because someone wanted to bypass a safety device on a sorting machine. I got overruled by management and safety. I was targeted for it. I was a lead over five techs, and I always took safety very seriously. Nothing in the world is worth more than a human life. At Amazon, they’re more worried about a package leaving the building.”
1. Scanners, Quotas, and Time Off Task (TOT)
“… the documents relayed how Amazon monitors worker productivity using a system that “automatically generates any warnings or terminations regarding quality or productivity without input from supervisors.” Amazon’s system also tracks “time off task,” or what it refers to as TOT. If workers take breaks from scanning packages for too long, the system generates warnings that eventually can lead to firings.”
Every Amazon warehouse worker is a node in a quantified workflow. Scanners track each scan, item moved, and every single second between actions. Fall behind the algorithmically set “rate,” and the system flags you. Warnings, discipline, and even termination are often automated, no messy human interaction needed.
The numbers here are brutal: 5 minutes of “gap time” before TOT starts counting. 30 minutes gets you a warning. One hour triggers the discipline cascade. Two hours in a single day? The algorithm fires you. No manager needed. It’s automated management by execution queue.
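To make that cascade concrete, here is a minimal sketch of what such an automated escalation rule could look like. The thresholds mirror the figures reported above; the structure, names, and the choice to count whole gaps over five minutes are my own assumptions, not Amazon’s actual code.

```python
# Toy model of the automated "Time Off Task" discipline cascade described above.
# Thresholds mirror the reported figures; everything else is an assumption.

def tot_action(daily_gaps_minutes: list[float]) -> str:
    """Return the automated action for one worker's day, given their gap times."""
    # Assumption: gaps of 5 minutes or less are not counted as Time Off Task.
    tot = sum(gap for gap in daily_gaps_minutes if gap > 5)
    if tot >= 120:   # two hours in a single day -> automated termination
        return "terminate"
    if tot >= 60:    # one hour -> discipline cascade
        return "discipline"
    if tot >= 30:    # thirty minutes -> warning
        return "warning"
    return "no_action"

# Example: three longer pauses in one shift add up fast.
print(tot_action([3, 12, 25, 40]))  # -> "discipline" (12 + 25 + 40 = 77 minutes)
```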
Now, Amazon claims to have gotten rid of TOT, but how true this is in practice seems to be a bit sketchy, with some workers reporting that their managers are more understanding while others claim that they’re still being monitored closely.
During Bessemer’s union drive, Amazon pulled “algorithmic slack-cutting” — temporarily easing TOT enforcement to create false relief. Practices like this are used to discourage workers from organizing by offering temporary carrots and “IOUs”.
2. AI-Powered Cameras and Biometric Tracking
Surveillance is total. Ceiling-mounted cameras, vision algorithms, and motion analysis software track everything from posture to proximity. France’s data protection authority found it so invasive that it fined Amazon €32 million for “excessively intrusive” monitoring-
“The French SA found several breaches of the GDPR regarding:
1. Warehouse stock and order management:
Failure to comply with the principle of data minimisation (Article 5.1.c GDPR).
Failure to ensure lawful processing (Article 6 GDPR) by using three indicators which are illegal:
the “Stow Machine Gun” indicator, which signals an error when an employee scans an item “too quickly” (i.e. in less than 1.25 seconds after scanning a previous item);
the “idle time” indicator, which signals periods of scanner downtime of ten minutes or more;
the “latency under ten minutes” indicator, which signals periods of scanner interruption between one and ten minutes.
The French SA noted that the processing of these three indicators could not be based on legitimate interest, as it led to excessive monitoring of the employee regarding the objective pursued by the company.
2. Work schedule and employee appraisal:
Failure to comply with the principle of data minimisation (Article 5.1.c GDPR). The work schedule in the warehouses, along with the assessment and training of the employee do not require access to every detail of the data and statistical indicators provided by the scanner used by the employee and reported over the last month.
Failure to comply with the obligation to provide information and transparency (Articles 12 and 13 GDPR).
3. Video surveillance processing:
Failure to comply with the obligation to provide information and transparency (Articles 12 and 13 GDPR).
Failure to comply with the obligation to ensure security of personal data (Article 32 GDPR).”
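To see how mechanical the flagged indicators are, here is a minimal sketch that computes all three from a list of scanner timestamps. The thresholds come from the decision quoted above; everything else (names, structure) is my own illustration, not CNIL’s wording or Amazon’s implementation.

```python
# Illustrative reconstruction of the three indicators the French SA ruled unlawful.
# Input: scanner event timestamps (seconds since shift start) for one employee.

def scanner_indicators(timestamps: list[float]) -> dict:
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return {
        # "Stow Machine Gun": item scanned less than 1.25 s after the previous one
        "stow_machine_gun": sum(1 for g in gaps if g < 1.25),
        # "idle time": scanner downtime of ten minutes or more
        "idle_time": sum(1 for g in gaps if g >= 600),
        # "latency under ten minutes": interruptions between one and ten minutes
        "latency_under_ten": sum(1 for g in gaps if 60 <= g < 600),
    }

print(scanner_indicators([0, 0.9, 2.5, 130, 900]))
# -> {'stow_machine_gun': 1, 'idle_time': 1, 'latency_under_ten': 1}
```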
However, it is worth noting that as long as fines stay the primary punishment for misconduct, companies are incentivized to treat them “as the cost of doing business”- especially when the cost is a fraction of the revenue generated by the company.

In delivery vans, Netradyne’s Driveri system watches drivers with four lenses. Yawn? Flagged. Check your mirror? Dinged. Scratch your face? That’s “distracted driving.” Drivers must sign biometric consent forms — hand over your facial recognition data or hand over your job.
The use of these systems often pressures drivers into driving more aggressively to meet their targets, which has led to several problems, the responsibility for which Amazon sidesteps-
This is not dissimilar to how tech companies have avoided responsibility for child labor and other labor violations in the past by arguing that the violations were committed by their subcontractors and not directly by them.
3. The Social Graph Surveillance Layer
“Wiggin writes that the extreme stress cultivated in the Bessemer warehouse from AI and algorithmic surveillance led workers to cluster into social media groups to discuss grievances and figure out how to circumvent the specific contours of Amazon’s digital surveillance.
According to the study, Amazon “ran a social media surveillance program that monitored more than 43 Facebook groups, most of which were nominally private, as well as numerous Web sites [and] subreddits.” It also found that “[t]he program’s described aim was to ‘capture’ and categorize posts of interest for potential investigation, including those mentioning complaints from warehouse workers and planned strikes or protests.””
Amazon monitors communication inside and out. The mandatory A to Z app pushes anti-union propaganda disguised as updates in order to dissuade workers from unionizing.

Leaked documents show Amazon’s Global Security Operations Center tracking 43+ private Facebook groups, Reddit threads, and Twitter accounts. They’ve hired Pinkerton operatives to infiltrate worker groups. They’ve posted jobs for “intelligence analysts” to monitor “labor organizing threats.”
This is social graphing weaponized. They’re not just watching your work. They’re watching your network.
4. Amazon Q and Predictive Suppression
Amazon Q can ingest data from 50+ business tools — Slack, email, scanner logs, everything. It doesn’t just monitor. It predicts.
At Whole Foods, Amazon has used “heat maps” scoring unionization risk across 24+ factors: team sentiment, local unemployment, and even racial diversity (less diverse stores unionize more, their data scientists discovered).
With Q’s capabilities, Amazon could:
Predict organizing likelihood down to the individual
Generate personalized anti-union content at scale
Identify “labor threats” before workers know they want to organize
Auto-adjust schedules to isolate potential organizers
As AI gets stronger, this capability will only get more refined, ultimately creating a Psycho-Pass-style “latent agitator score” that classifies people before they act. At that stage, the power imbalance b/w individuals and systems might be too skewed to fix, so we have to prevent this from happening now.

The Integrated Oppression System
These aren’t separate tools. They’re components of a unified control architecture where scanner data feeds performance metrics analyzed by AI that adjusts work rates monitored by cameras feeding computer vision systems flagging “anomalous behavior” triggering social media investigations identifying you as a “risk.”
That is the new face of labor control. Your workplace is a neural net whose core function is to kill resistance before it starts.
Section 3: How Amazon Engineers Suppression
Amazon’s algorithmic control isn’t just about watching — it’s about rewiring human behavior at scale. Only noobish systems worry about catching you organizing. A company as committed to excellence as Amazon builds systems that prevent you from even thinking about it. That, kids, is how you go above and beyond for your shareholders.
Let’s cover this magical system now-
Fear as a Feature, Not a Bug
“We have an injury crisis and we’re being watched and you feel like you’re in prison.”
The architecture works through calculated terror. Workers report spending mental energy calculating whether a bathroom break will push them over TOT limits.
This isn’t accidental. As we’ve covered, Amazon tracks “team member sentiment” as a unionization risk factor. The more miserable workers feel, the more likely they are to organize — so the system maintains a precise level of fear. Just scared enough to comply, not quite desperate enough to revolt.
Fragmentation by Algorithm
Amazon’s scheduling AI doesn’t just assign shifts — it atomizes solidarity. The system can detect social clusters through various signals:
Which workers take breaks together (camera data)
Who talks to whom on the floor (proximity tracking)
Social media connections (external surveillance)
Similar TOT patterns (behavioral correlation)
Once identified, the algorithm can separate potential organizers through “operational adjustments”:
Assigning them to different shifts
Moving them to opposite ends of the warehouse
Varying break times to prevent congregation.
Creating workflow patterns that minimize interaction
Unionization and organizing are processes that require a lot of trust. Having this tool in their back pocket allows Amazon to disrupt organizing efforts by preventing those bonds from forming in the first place.
The Opacity Trap
The black-box nature of algorithmic management creates learned helplessness. Workers don’t know:
How productivity rates are calculated
What triggers automated warnings
Why some infractions get flagged and others don’t
Whether their manager even has power to intervene
This deliberate opacity serves multiple functions:
Workers can’t game the system if they don’t understand it
Inconsistent enforcement keeps everyone on edge
The algorithm becomes an unchallengeable authority
Management can claim “the computer did it” to avoid accountability
The net effect: workers stop believing that resistance is even possible.

“Algorithmic Slack-Cutting”:
As covered earlier, Amazon is known to have temporarily eased working conditions to tempt workers to vote against the union.
The Anticipatory Conformity Engine
“I was talking to one young man who’s a felon, and he was giving me the breakdown of how working [the warehouse] reminded him of his time in prison. The real short lunch breaks, the constant demand. For me, talking about the surveillance, it’s almost like you’re treated like a criminal. And what I mean by that is, it’s constant surveillance. And then when you leave out of the building … you have to take your hat off, search through your bags, on and on and on …”
— AMAZON WAREHOUSE WORKER, NORTH CAROLINA
Workers self-censor not because they’re definitely being watched, but because they might be. This creates “anticipatory conformity” — people police themselves harder than any external force could.
Breaking Solidarity Before It Forms
To reiterate, the architecture’s goal isn’t catching organizers — it’s preventing them from existing. This is suppression perfected — not through violence but through architecture. And the results are outstanding (if you have a lot of Amazon stock and aren’t worried about pesky things like strangers having a good quality of life).
Now, there are lots of stories about how harmful these practices are, and how Amazon, which professes a strong love for Responsible AI, constantly ignores shareholder requests to improve worker conditions. But dwelling on them would be closer to rage-bait and would repeat ideas we’ve covered at length. So instead, let’s talk about what we can do to burn these systems to the ground. After all, we can ask management, start social media petitions, go on walkouts, and hold little sessions. But until there is real bite behind the asks, all of this is effectively begging Amazon to change. To bring actual change, we must have iron fists under the velvet gloves.
“Working at Amazon is, I don’t know if you ever watched … Squid Game. … It is a degrading job [with] this constant surveillance. You’re standing constantly. A lot of workers … have been standing on those hard concrete floors for so long, we have lower-back issues. We have other issues that are going on physically with our bodies. If you take our facility alone, since the facility opened, every three days, first responders are called to that facility. And when I say that it’s like [Squid Game], you see co-workers, you see friends, some workers have relatives, you see relatives who pass out, who are taken out of their facility on the stretcher.”
— AMAZON WAREHOUSE WORKER, NORTH CAROLINA
Section 4: How to Fight Against Amazon's System
Every complex system has pressure points, blind spots, and dependencies. Exploiting them isn’t just possible; it’s imperative. This isn’t about asking nicely.
While we negotiate, we must also take steps to make such systems of control too costly, too unreliable, and too damaging to maintain. This doesn’t require a complete shutdown; one only needs to make the system too unprofitable to be worth it. And this is a lot more doable than some people think.
A. Coordinated Disruption: Data Poisoning & Visual Obfuscation
As the adage goes: Garbage In, Garbage Out. By intentionally messing with these systems, you can create data spikes that force constant retuning and/or human supervision to keep them useful. Both are expensive, which makes it more likely that organizations will pull the systems back.
TOT Swarms and “Strategic Glitches”: Amazon’s “Time Off Task” system is a prime target. Imagine coordinated, facility-wide “bathroom break waves” or simultaneous, brief “equipment check” pauses. Not enough to push any individual over the firing threshold, but enough to flood the system with anomalies. The AI screams “inefficiency!”; human managers get bogged down in endless reviews. The goal: corrupt their surveillance baselines, force manual overrides, and make automated discipline a logistical nightmare. It’s about increasing their operational cost of control.
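As a rough illustration of why this hurts, here is a toy simulation of how a coordinated wave of short, individually harmless pauses inflates the anomaly rate an automated monitoring baseline sees. Every number in it is an assumption chosen for the sketch, not Amazon’s actual parameters.

```python
# Toy simulation: coordinated short pauses vs. an anomaly-detection baseline.
# All numbers are assumptions chosen for illustration only.
import random

random.seed(0)
WORKERS, FLAG_MINUTES = 500, 10.0   # assumed facility size and per-gap review threshold

def longest_gap(coordinated: bool) -> float:
    """Longest single non-scanning gap (minutes) for one worker in a shift."""
    gap = random.expovariate(1 / 3.0)          # normal behavior: mean 3-minute gaps
    if coordinated:
        gap += random.uniform(6, 9)            # everyone adds one brief, modest pause
    return gap

for label, coordinated in [("baseline day", False), ("coordinated wave", True)]:
    flags = sum(longest_gap(coordinated) > FLAG_MINUTES for _ in range(WORKERS))
    print(f"{label}: {flags} of {WORKERS} workers flagged for review")

# The point: no individual pause is long enough to justify discipline on its own,
# but the review queue handed to human managers grows by roughly an order of magnitude.
```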
Adversarial Fashion & Reflective Camouflage: Their AI cameras rely on facial recognition and pattern analysis. Specific techniques can disrupt the patterns these systems depend on, making them less reliable at scale. There are two ways to accomplish this.
Some of you know about adversarial perturbations: techniques that make minor tweaks to input data. These tweaks are functionally imperceptible to humans, but really mess up AI systems-

People outside Amazon can apply basic adversarial perturbations to their images prior to uploading them online. Used with a diverse and extensive set of attacks, this corrupts a large share of the training data scraped from the web, throwing off the training of such surveillance systems not only at Amazon but globally.
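For readers who want to see the principle in code, here is a minimal FGSM-style (Fast Gradient Sign Method) sketch in PyTorch. The pretrained ResNet is just a stand-in for whatever model a scraper might train or reuse; real image-cloaking tools (projects like Fawkes, for example) are far more sophisticated, and this is only meant to show the idea.

```python
# Minimal FGSM sketch: perturb an image so a classifier's prediction degrades
# while the change stays imperceptible to humans. The pretrained ResNet is only
# a stand-in model for illustration.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_perturb(image: torch.Tensor, epsilon: float = 4 / 255) -> torch.Tensor:
    """Return a perturbed copy of `image` (shape [1, 3, H, W], values in [0, 1])."""
    image = image.clone().requires_grad_(True)
    logits = model(image)
    # Push the image away from whatever the model currently believes it is.
    loss = F.cross_entropy(logits, logits.argmax(dim=1))
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

# Usage: load a photo as a [1, 3, 224, 224] tensor in [0, 1], perturb, then save/upload.
x = torch.rand(1, 3, 224, 224)            # placeholder image
print((fgsm_perturb(x) - x).abs().max())  # max per-pixel change ≈ epsilon
```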
However, this is a long-term process. For workers in the warehouses, there are faster solutions: simple products like IR-deflecting clothing, makeup patterns designed to confuse facial recognition, or even just coordinated “bad hair days” if that throws off biometric scanners. CV Dazzle is an older example of adversarial fashion. Modern systems will require methods adapted to the specific models in use, but similar principles still apply.
Theoretically, these attacks work by altering the saliency maps that vision models use to classify targets (a minimal code sketch of this follows the list below). While each model has its own saliency maps, there is a fair bit of carryover between models. One of the areas I’m looking into now is techniques that are-
Effective a reasonable fraction of the time (~10% hit rate, applied at scale, is a VERY good starting point. Anything more, and I’m kissing whoever comes up with the attack).
Low-cost, both monetarily and in time/energy.
Hard to ban/regulate.
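If “saliency map” is new to you, here is the simplest version: the gradient of the model’s top prediction with respect to the input pixels, which highlights the regions the model leans on. It reuses the same kind of pretrained stand-in model as the sketch above; again, a toy illustration rather than an attack on any specific system.

```python
# Minimal gradient saliency map: which pixels most influence the top prediction.
# Same pretrained stand-in model as the earlier sketch; real attacks would need
# to target (or transfer to) the specific surveillance models in question.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def saliency_map(image: torch.Tensor) -> torch.Tensor:
    """Return an [H, W] map of gradient magnitudes for an image of shape [1, 3, H, W]."""
    image = image.clone().requires_grad_(True)
    top_logit = model(image).max()
    top_logit.backward()
    # Collapse the colour channels: per-pixel influence on the top class score.
    return image.grad.abs().amax(dim=1).squeeze(0)

sal = saliency_map(torch.rand(1, 3, 224, 224))
print(sal.shape, sal.max())  # high-value regions are what adversarial patterns target
```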
If any of you want to get involved here, shoot me a message. This field requires a lot of trial and error, so we’ll need all the manpower/resources we can get. However, as hard as it is, the task is doable. And it’s absolutely worth doing to protect our freedom.

B. Weaponizing Truth: Leaks and Financial Pressure
Secrecy is Amazon’s ally. Transparency, especially financial transparency about their risks, is ours.
Strategic Whistleblowing — The Insider Arsenal: Every leaked “heat map,” SPOC dashboard, or Amazon Q labor suppression proposal is a bombshell. We need to cultivate and protect insiders willing to expose the architecture of suppression. Frame these leakers not as disgruntled employees, but as ethical actors revealing systemic abuse.
The Algorithmic Risk Premium — Speaking Wall Street’s Language:
Forget vague ethical appeals to investors. Let’s quantify the financial risk of Amazon’s AI control stack. Make Amazon’s practices expensive.
Target institutional investors with:
Quantified turnover costs (hiring, training, productivity loss)
Legal liability projections (GDPR fines, NLRB actions)
Reputation damage modeling
Long-term workforce depletion risks
Frame it in their language: “Amazon’s labor practices represent unpriced systematic risk.” Show them the math.
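Here is a deliberately simple version of “the math.” Every number below is a placeholder to be replaced with sourced figures from filings, academic studies, and regulatory decisions; the point is the framing, not these particular values.

```python
# Back-of-the-envelope "algorithmic risk premium" model for investor conversations.
# Every input below is a PLACEHOLDER to be swapped for sourced figures.

workforce          = 750_000   # hourly workers in scope (placeholder)
annual_turnover    = 1.50      # turnover rate attributable to churn-heavy management (placeholder)
replacement_cost   = 3_000     # hiring + training + lost productivity per exit, USD (placeholder)
expected_fines     = 200e6     # annualized GDPR / NLRB / AI-act exposure, USD (placeholder)
reputation_haircut = 0.001     # assumed drag on revenue from brand damage (placeholder)
annual_revenue     = 600e9     # company revenue, USD (placeholder)

turnover_cost = workforce * annual_turnover * replacement_cost
total_risk    = turnover_cost + expected_fines + reputation_haircut * annual_revenue

print(f"Turnover cost:       ${turnover_cost / 1e9:.2f}B / year")
print(f"Total unpriced risk: ${total_risk / 1e9:.2f}B / year")
```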
C. Global Legal & Regulatory Ambush: Death by a Thousand Cuts
Amazon leverages a global workforce; the legal fight must be global too.
Multi-Jurisdictional Lawsuits — Coordinated Legal Warfare:
Don’t just file one ULP charge. File dozens, across states and countries. Target GDPR violations in Europe (and push for bigger fines). Exploit emerging state-level AI accountability laws (Colorado, NYC). Focus on algorithmic opacity, automated firings without human review, and the chilling effect on Section 7 rights. The goal is a “death by a thousand legal cuts,” making their current model untenably expensive and complex to defend everywhere.
Mandating Explainable AI (XAI) in Labor Discipline: This is a key demand. Push legislation requiring:
Open audit logs for automated employment decisions
Worker access to personal data and decision logic
Human review rights for algorithmic actions
Public disclosure of surveillance capabilities
Colorado’s AI Act is a template (shoutout Colorado for focusing on important AI measures and not the hype). Replicate and strengthen it across states. Force Amazon to explain why two hours of TOT gets someone fired but 1 hour 59 minutes doesn’t. The goal isn’t to win every appeal, but to make them dread the idea of appeals.
D. Operational Chaos: Direct Action & Algorithmic Sabotage
Calculated disruption can expose system vulnerabilities and build worker power.
Flash Mobs of Inefficiency — Malicious Compliance at Scale:
Not strikes, but tactical compliance slowdowns. Imagine hundreds of workers in a facility simultaneously adhering exactly to every single (often contradictory or inefficient) rule and safety protocol for a few hours. Picking at the slowest permissible “rate.” Some specific coordination tactics to limit backlash:
Use burner phones and encrypted apps
Create plausible deniability (“I felt sick”)
Document that following safety protocols causes slowdowns
Protect organizers through randomization
Strategic Sick-Outs & Micro-Strikes During Peak Pressure Points:
Prime Day. Black Friday. Q4 holiday rush. These are Amazon’s high-stakes moments. Coordinated, short-term work stoppages or sick-outs in key logistical nodes during these periods can have an outsized impact on their operations and bottom line. This requires immense solidarity and a robust support network for participating workers, but the leverage is undeniable.
Alongside all this, some additional ways to attack the system are-
Mass-report safety issues through official channels, overwhelming response systems. Proactively. Even potential issues that might not be issues. But safety first, right? :)
Game sentiment surveys to trigger false positives on union risk. Use ever-evolving coded language to prevent the systems from catching up.
Coordinate perfect compliance with every safety rule (malicious compliance)
Request human review for every algorithmic decision.
The system assumes workers will minimize friction. Add friction everywhere. Make the machine work harder than the humans. Amazon might have the resources and talent to scale operations, but with constant “movement” inside the organization, the system’s massive scale starts working against it.

This is not a complete list, simply a starting point. If you have anything you want to add/comment on, would love to hear it.
Section 5: Conclusion: Beyond Amazon
This isn’t just a fight for Amazon’s soul. It’s a war for the future operating system of labor.
Amazon is not unique. It’s simply ahead. The systems it builds now — algorithmic management, predictive suppression, machine-automated obedience — will soon define the global template for how capital governs labor.
If Amazon perfects the algorithmic sweatshop, it gets packaged, productized, and sold to every employer on Earth. Today, it’s warehouse workers under the microscope. Tomorrow it’s nurses, teachers, truckers, coders. Everyone. If it works for Amazon, it will spread. If it breaks here, it can be stopped everywhere.
This is not about nostalgia for old unionism or technophobia about AI. It’s about power — who holds it, how it’s enforced, and whether we still have the ability to fight back. The enemy is not the algorithm. It’s the power asymmetry: where one side writes the code, and the other gets written into it.
The future of work and freedom is already being written. It’s up to us to decide who finishes the sentence.
Reminder- This article is my responsibility alone. All my conversations w/ Amazon employees around the timeline of this publication were purely coincidental and on unrelated topics, such as the shitshow that is Arsenal (Men)’s title contention.
Thank you for being here, and I hope you have a wonderful day.
Dev <3
If you liked this article and wish to share it, please refer to the following guidelines.
That is it for this piece. I appreciate your time. As always, if you’re interested in working with me or checking out my other work, my links will be at the end of this email/post. And if you found value in this write-up, I would appreciate you sharing it with more people. It is word-of-mouth referrals like yours that help me grow. The best way to share testimonials is to share articles and tag me in your post so I can see/share it.
Reach out to me
Use the links below to check out my other content, learn more about tutoring, reach out to me about projects, or just to say hi.
Small Snippets about Tech, AI and Machine Learning over here
AI Newsletter- https://artificialintelligencemadesimple.substack.com/
My grandma’s favorite Tech Newsletter- https://codinginterviewsmadesimple.substack.com/
My (imaginary) sister’s favorite MLOps Podcast-
Check out my other articles on Medium. : https://rb.gy/zn1aiu
My YouTube: https://rb.gy/88iwdd
Reach out to me on LinkedIn. Let’s connect: https://rb.gy/m5ok2y
My Instagram: https://rb.gy/gmvuy9
My Twitter: https://twitter.com/Machine01776819