As AI agents and automated decision-making (ADM) tools move into hiring, credit, insurance, and customer support, the legal stakes are rising fast. In this episode of Today in Tech, attorney Rob Taylor (Carstens, Allen & Gourley) explains the real compliance obligations behind ADM—disclosure and consent, explainability, data-retention requirements (e.g., multi-year hiring records), and how bias creeps in even without protected attributes. We unpack recent lawsuits (from call analytics to AI resume screening), why developers as well as deployers face liability, and how to build effective AI governance with true human-in-the-loop oversight.

Watch the video above and read the full transcript below to learn practical steps: inventory where ADM is used, map data flows, add clear notices/opt-outs, preserve decision data, stress-test for bias, and demand accountability from vendors. Essential guidance for legal, risk, HR, and AI teams seeking safer, compliant AI adoption.
Keith Shaw: As companies look toward adopting more AI and automated decision-making tools to become more efficient, a minefield of legal issues and complications is emerging. AI agents on the horizon further muddy the waters when it comes to risk, liability, and accountability.

On this episode of Today in Tech, we’re going to explore the many legal issues companies may face with the rise of agentic AI. Hi, everybody.
Welcome to Today in Tech. I’m Keith Shaw. Joining me on the show today is Rob Taylor. He is an attorney with Carstens, Allen & Gourley, and he does extensive work in the area of automated decision-making. Welcome to the show, Rob.

Rob Taylor: Thanks for having me, Keith. Happy to be here.

Keith: Let’s jump right in. What mistakes are you seeing companies make right now in their deployment of automated decision-making tools? And even if they’re technically not using AI, there are a lot of tools out there making decisions, right?
Rob: One big mistake I see is that many companies are singularly focused on AI. They focus on compliance with AI laws and regulations without considering that what they’re rolling out may actually be classified as ADM — automated decision-making.
That’s a whole other area of law, regulations, and compliance obligations. ADM isn't just about artificial intelligence; it's much broader. Anytime you roll out a solution that makes automated decisions affecting individuals or their livelihood, you may fall under ADM regulations.
Another common misconception: companies assume that if they aren’t making the final decision, ADM rules don’t apply. That’s not necessarily true. There's a global patchwork of ADM laws — some cover only final decisions, while others cover interim decisions.
And even if the law doesn’t apply to interim decisions, companies may still face liability under other laws, as we've seen in recent litigation.

Keith: When did we start seeing this wave of ADM laws? Is this mostly recent? And are these on the federal level, state level, or international?
Why were these regulations initiated?
Rob: It mostly comes down to individual rights. The intent behind these laws is that individuals shouldn’t be forced to have consequential decisions made solely by a tool — they have a right to human involvement. That’s the common theme worldwide.
In some jurisdictions, the developer or deployer of the ADM system must offer individuals an opt-out option so they aren’t forced through automated decision-making.
Keith: So is this a newer development — within the last five years — or has this been happening for decades?

Rob: It’s more recent. As AI solutions become more agentic and impact individuals directly, companies are increasingly falling within the scope of ADM laws.
Yet many companies aren't even aware ADM applies to them — they think they’re just releasing “an AI solution,” not realizing it triggers ADM rules as well.
Keith: Can you give some real-world examples? What industries are using these tools?

Rob: Sure. Major areas include:

* Credit scoring and creditworthiness decisions
* Insurance underwriting and claims decisions
* Hiring and talent acquisition — resume screening, ranking, automated interviews
* Employee assessments — skills tests, behavioral evaluations, cultural fit screening

These are high-risk scenarios because they affect an individual’s livelihood.
We’ve already seen litigation in hiring before AI, and it's increasing now with AI systems.
Keith: So if we were doing a job interview and an automated interview tool analyzed my facial expressions or body language, it could impact whether I advance in the process?

Rob: Exactly. Even if facial expression analysis isn’t the final determinant, it may influence the decision.
ADM rules are designed to address this — life-altering decisions shouldn't be made by a tool without disclosure and consent.
Keith: So the key is transparency — informing users the technology is being used?

Rob: Yes. Disclosure and consent are low-cost, highly effective protections. Even if the law doesn’t explicitly require it, it's a best practice. And companies must consider not only interactive use, but also internal AI use.
For example, Patagonia was sued because a third-party AI tool analyzed customer service calls without notifying customers. Even internal use requires clarity if the AI processes consumer information.
Keith: Will we start seeing more upfront disclosures? Almost like a “This call uses AI” message?

Rob: I think so. And why hide it? Transparency prevents litigation. But transparency raises another issue: explainability.
Some ADM laws — like new California regulations — require employers to retain data used in hiring decisions for four years so individuals can challenge decisions. This conflicts with typical data-minimization practices, so companies must understand both AI and ADM frameworks.
Even if ADM laws don’t explicitly require data retention, companies may need evidence to prove decisions weren't discriminatory. So retaining data becomes essential.

Keith: And hiring is where we’ve seen major bias issues — resumes filtered by gender, race, age, university, etc. What should companies be watching for?
Rob: Bias can arise unintentionally, especially when models learn from historical hiring data. Even if companies exclude protected attributes, AI can infer correlations — like universities attended — that disproportionately impact certain groups. Most bias is unintentional, but still actionable.
Companies need to proactively identify weak points in their system and test for bias.
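To make the bias stress-testing point concrete, here is a minimal sketch of one common check: comparing selection rates across applicant groups in a screening tool's logs and flagging adverse impact under the widely cited four-fifths rule. The data, group labels, and function names are hypothetical illustrations rather than anything discussed in the episode; a real audit would run on the deployer's own decision records, with legal guidance on how groups are defined.

```python
# Hedged sketch: a simple adverse-impact check over hypothetical screening logs.
# Not a compliance tool; it only illustrates the kind of test Rob describes.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group_label, advanced) pairs from the screening tool's log."""
    totals, advanced = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            advanced[group] += 1
    return {g: advanced[g] / totals[g] for g in totals}

def adverse_impact(rates, threshold=0.8):
    """Flag groups whose selection rate falls below threshold * the highest group's rate."""
    best = max(rates.values())
    return {g: round(r / best, 2) for g, r in rates.items() if r / best < threshold}

# Hypothetical screening log: (group, advanced_past_ai_screen)
log = [("A", True)] * 40 + [("A", False)] * 60 + [("B", True)] * 20 + [("B", False)] * 80

rates = selection_rates(log)
print(rates)                  # {'A': 0.4, 'B': 0.2}
print(adverse_impact(rates))  # {'B': 0.5}  -> group B advances at half group A's rate
```

The check is deliberately dependency-free and coarse; in the made-up log above, group B is flagged because it advances at half of group A's rate, the kind of proxy-driven disparity described in the interview.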
Keith: But with thousands of resumes, humans can't review them all either. So both humans and AI introduce bias.

Rob: True, but AI creates scale. If AI filters 1,000 resumes and ranks only 20, recruiters start there and may never see qualified candidates outside that group.
We've seen this in lawsuits — such as those targeting Workday, where AI allegedly screened out applicants without human review.
Keith: Interestingly, they're suing the developer rather than employers. Do you expect more of that?

Rob: Yes. Courts may hold developers liable when models are inherently biased. Developers are the ones who understand the technology, and plaintiffs often target deep-pocket defendants.
We’ll likely see shared responsibility across developers and deploying companies going forward.
Keith: Do we need more laws, or do existing laws already cover most situations?

Rob: That’s one of the biggest myths — that without AI-specific laws, it's the Wild West. Existing laws like employment discrimination and consumer protection often apply.
We don’t need laws to specifically say, “You cannot discriminate with AI.” Discrimination is already illegal, regardless of whether a human or AI does it.
Product liability principles are also emerging in AI cases, and the law will evolve as needed.
Keith: Where will we see ADM litigation next?

Rob: Anywhere automated systems make consequential decisions about individuals:

* Hiring
* Credit
* Insurance
* Healthcare
* College admissions
Keith: So what should companies do? Who do you advise — legal teams, developers, tech teams?

Rob: All of them. To evaluate risks, you need to understand engineering, data flows, decision logic, and human oversight. Many companies form AI governance committees, but those committees often lack AI expertise.
That’s a mistake — someone with AI knowledge must be involved, or they should bring in experts.
Poor understanding leads to failed AI deployments. We’ve seen reports that ~95% of AI deployments fail or don’t deliver ROI, often due to lack of expertise. Companies are now increasingly hiring AI consultants and experts instead of relying solely on internal teams.
Keith: So do things get better or worse from here?

Rob: Litigation teaches lessons. Smart companies learn from others’ mistakes and avoid easy-to-prevent lawsuits — like failing to notify customers about AI use. But many still don't know what they don't know.
It's an exciting time. Technology and laws are rapidly evolving. Liability frameworks are developing, and companies need to stay ahead.
Keith: Rob, thanks again for joining us and walking through these legal liability issues.

Rob: Thank you for having me. Great discussion.

Keith: That’ll do it for this week's show. Be sure to like the video, subscribe, and comment below.
Join us every week for new episodes of Today in Tech. I'm Keith Shaw — thanks for watching.