HR Startup Lattice Abandons AI Employee Treatment Initiative

A spokesperson from Lattice recently contacted me to share that the company has decided against proceeding with its initiative to integrate AI as formal employees. CEO Sarah Franklin explained, “While this concept sparked considerable discussion and inquiries, many questions remain unanswered. We remain committed to collaborating with our clients on the responsible deployment of AI but will not pursue the integration of digital workers into our product.”

The original article is as follows:

The rapid advancement of generative AI in recent years has raised concerns in the workforce: Could companies eventually replace human workers with AI tools? The revolution hasn’t arrived just yet. Companies are cautiously exploring the use of AI for tasks traditionally performed by humans, but most are hesitant to outright replace human employees with machines. However, one company is boldly embracing the AI future by formally onboarding AI bots as employees.

Lattice is “hiring” AI bots

The company in question, Lattice, made headlines with this announcement, referring to these bots as both “digital workers” and “AI employees.” CEO Sarah Franklin believes that the AI workplace revolution has arrived, and companies like Lattice must adapt accordingly. For Lattice, this means treating an AI tool that integrates into its workspace as if it were a human employee. That includes onboarding the bot, setting goals for its AI functions, and providing feedback. Lattice will maintain employee records for these “digital workers,” integrate them into its human resource management system, and give them training akin to what a traditional employee would receive. These “AI employees” will also have human managers, at least for the time being.

Franklin shared the news via LinkedIn, sparking significant discussion across platforms like Reddit and X. In her post, Franklin acknowledged that “many questions arise from this process, and we do not yet have all the answers,” but expressed confidence in breaking new ground and challenging conventions. (The post has drawn 314 comments, though commenting has since been disabled.) In a separate post on Lattice’s official site, Franklin anticipated some of these questions: what it means to hire digital workers, how they are onboarded, how their performance is measured, and what the broader impact on jobs will be, including future prospects for our children. She also asked whether these AI entities could share human values, and whether such anthropomorphism of AI is appropriate at all.

An organizational chart screenshot illustrates how Lattice envisions AI employees fitting into its workplace suite. “Piper AI,” depicted as a sales development representative, is part of a three-person team reporting to a manager. Piper AI has a complete employee record, including a legal name (Piper AI), a preferred full name (Piper AI), a work email (esther@qualified.com), and a bio stating, “I’m Piper, an AI tool used for lead generation, note-taking, email drafting, and scheduling your next call.” (The origin of “Esther” remains unclear.)

This isn’t Lattice’s first foray into AI; the company already offers AI-powered HR software. For Franklin and Lattice, this announcement likely aligns with a preexisting AI strategy. However, to outsiders, the concept may seem perplexing.

“AI employees” are a contentious concept

Without additional context, this development strikes me as highly unconventional. Integrating an AI bot into a platform is one thing; many companies have done so successfully. Piper AI, for example, could function effectively as an assistant within your workspace, helping with tasks like scheduling meetings or drafting emails. However, Lattice’s approach of “hiring” and treating an AI bot as a human employee, albeit without compensation or benefits, raises questions. Will Piper AI enjoy unlimited paid time off, or will it be expected to work continuously, year-round?

To me, terms like “digital workers” and “AI employees” read more like buzzwords, and folding AI tools into employee records seems to be more about appearances than substance. Lattice may present this move as a genuine embrace of AI technology, one that impresses people who are excited by cutting-edge tech but don’t fully understand how it works. However, AI lacks genuine intelligence; its responses come from predictive models trained on vast datasets of words and phrases. If designed to take notes during meetings, an AI bot will fulfill that role regardless of whether it is formally onboarded as staff.

Moreover, attributing too much autonomy to “AI employees” could backfire when these bots inevitably provide incorrect information. AI models routinely hallucinate, presenting fabricated information as fact. Despite extensive training data, companies have yet to resolve this issue and now caution users to verify AI-generated content. Humans make mistakes too, of course, but some people might naively trust whatever their AI coworker tells them, particularly if the technology is marketed as “the next big thing.”

It’s challenging to imagine how a human employee might feel if instructed to manage a glorified chatbot as they would a new hire. (“Hey Mike, you’re now responsible for supervising Piper AI. Ensure weekly check-ins, provide feedback, and monitor the bot’s development, even though it isn’t truly real. Rest assured, we have no plans to replace you with a digital worker.”)
