
This Company Wants to Onboard 'AI Employees,' Whatever That Means

Your next coworker might be entirely digital.


The rapid rise of generative AI over the past couple of years has inspired concern in the workforce: Will companies simply replace human employees with AI tools? The revolution hasn't quite arrived: Companies are dipping their toes into using AI to do work normally done by humans, but most are stopping short of explicitly replacing people with machines. However, one organization in particular is embracing the AI future with gusto, onboarding AI bots as official employees.

Lattice is "hiring" AI bots

The company in question, Lattice, made the announcement on Tuesday, referring to these bots as both "digital workers" and "AI employees." The company's CEO, Sarah Franklin, believes the AI workplace revolution is here, and, as such, companies like Lattice need to adapt. For Lattice, that means treating an AI tool they'll integrate into their workspace as if it were a human employee. That vision includes onboarding the bot, setting goals for the AI, and offering the tool feedback. Lattice will give these "digital workers" employee records, add them to their human resource management system, and offer them the same training a typical employee would receive. "AI employees" will also have managers, who, I assume, will be human. (For now.)

Franklin also shared the news on LinkedIn, in a post that has done the rounds on social media sites from Reddit to X. In this post, Franklin acknowledges that "this process will raise a lot of questions and we don't yet have all the answers," but that they're looking to find them by "breaking ground" and "bending minds." (This post has 314 comments, but they are currently disabled.) In a separate post on Lattice's site, Franklin shares some of those potential questions, including: "What does it mean to hire a digital worker? How are they onboarded? How are they measured? What does this mean for my job? For the future jobs of our children? Will they share our values, or is that anthropomorphism of AI?"

You can see in this blog post how Lattice envisions AI employees in their workplace suite: In one screenshot, an org chart shows "Piper AI," a sales development representative, as part of a three "person" team all reporting to a manager. Lattice gives Piper AI a full employee record, including legal name (Piper AI), preferred full name (Piper AI), work email ([email protected]), and a bio, which reads, "I'm Piper, an AI tool used to generate leads, take notes, draft emails, and schedule your next call." (So where does "Esther" come from?)

This is not the company's first foray into AI: Lattice offers companies AI-powered HR software. To Franklin, and to Lattice as a whole, this announcement likely fits an AI plan they've developed. To outsiders, however, it's entirely bizarre.

"AI employees" are bogus

Without much further context, I find all this deeply weird. It's one thing to integrate an AI bot into your platform, as many companies have done and continue to do. I mean, Piper AI would make sense as an assistant that hangs out in your work suite: If you want to use it to schedule a meeting or draft an email, great. If not, ignore it. Instead, Lattice wants to "hire" an AI bot and treat it the same way it treats you, albeit without the pay and the benefits. Does Piper AI also get unlimited PTO, or will it be forced to work 24/7, 365 days a year?

To me, "digital workers" and "AI employees" are buzzwords, and "onboarding" AI tools to employee resources is all about appearances: Lattice can say it's embracing AI in a "real way," and important people who care about cutting-edge tech but don't fully understand how it works will be impressed. But "AI" isn't actually intelligent. There's no "worker" to hire. Generative AI is based on a model, and responds to prompts based on that model's training set. A text-based large language model isn't actually "thinking"; rather, it's predicting what words should come next, based on the millions, billions, or trillions of words it has seen before.
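To see how little "thinking" is involved, here's a toy sketch in Python of next-word prediction. (This is purely illustrative: real large language models use neural networks with billions of parameters, not simple word counts, but the underlying task of "predict the next word from what you've seen before" is the same.)

```python
from collections import Counter, defaultdict

# Toy "training set" -- real models train on billions of words.
corpus = "the cat sat on the mat the cat ate the food".split()

# Count which word follows which (a simple bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training, if any."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat" (it followed "the" most often)
```

There's no understanding here, just statistics about which words tend to follow which. That's the author's point scaled down to a dozen lines: a bigger model makes better guesses, but it's still guessing.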

If the tool is designed to take notes during a meeting, it's going to take notes whether you assign it a manager or keep it as a floating window in your management system. Sure, you can train the bot to respond in ways that are more useful to your organization and workflow, if you know what you're doing, but that does not require you to onboard the bot to your staff.

In fact, giving "AI employees" too much credit could backfire when the bots inevitably return incorrect information in response to queries. AI has a habit of hallucinating: the bot makes things up and insists they're true. Even with huge amounts of training data, companies have not solved this problem, and now slap warnings on their bots so you know, "Hey, don't just trust everything this thing says." Sure, humans make mistakes all the time, but some people might be more inclined to believe what their AI coworker tells them, especially if you're pushing the tech as "the next big thing."

I'm struggling to imagine how an employee (human, mind you) would feel when their boss tells them they have to start managing a glorified chatbot as if they were any typical new hire. ("Hey Mike: You're going to be managing Piper AI from now on. Make sure to meet weekly, give feedback, and monitor the growth of this AI bot that isn't actually real. We totally won't replace you with a digital worker, too, so don't worry about that.")

I have reached out to Lattice with questions regarding this new policy, and will update this story if I hear back.

