
While employees increasingly use AI tools like ChatGPT for their work, companies are falling behind with their policies and controls.


Workers are adopting AI tools like ChatGPT much faster than companies can develop appropriate guidelines. A Federal Reserve Bank of St. Louis survey found that nearly a quarter of US workers now use generative AI weekly—reaching up to 50 percent in software and finance sectors.

The Financial Times (FT) reports that by September, less than half of surveyed companies had established concrete rules for AI use. As a result, many employees are experimenting with these new technologies in secret.

From total bans to controlled use

Many large companies initially responded with bans: Apple, Samsung, and Goldman Sachs prohibited their employees from using ChatGPT, primarily due to data security concerns.


Now, more companies are moving toward controlled usage. US retail giant Walmart has developed its own AI assistant for internal use, FT reports. At the same time, the company monitors how employees use external AI tools on company devices.

"We started at ‘block’ but we didn’t want to maintain ‘block’," Jerry Geisler, Chief Information Security Officer at Walmart, told the FT. "We just needed to give ourselves time to build . . . an internal environment to give people an alternative."

Employees hide AI use

A particularly sensitive issue: according to a Slack survey, almost half of all office workers wouldn't tell their supervisors they're using AI. They fear being seen as lazy or incompetent, or risking job cuts. A Microsoft and LinkedIn survey found similar results.

This concern is illustrated by the case of a 27-year-old pharmaceutical researcher who anonymously shared his experiences with the FT: He secretly used ChatGPT for his programming tasks because there were no clear guidelines. "I couldn’t see a reason why it should be a problem but I still felt embarrassed," he was quoted as saying.

The unclear legal situation also makes it difficult for companies to develop long-term AI strategies. Both the US and EU, as well as the UK, are currently drafting relevant laws. The EU AI Act, in particular, places numerous requirements on companies regarding AI use. However, many questions about intellectual property, data protection, and transparency requirements remain unresolved.

Summary
  • Employees are increasingly using AI tools such as ChatGPT in their work, but companies are lagging behind in putting policies and controls in place. According to surveys, up to 50 percent of employees in some industries use generative AI on a weekly basis, but less than half of companies had concrete policies in place as of September, reports the Financial Times.
  • Many companies initially responded with outright bans, but are now focusing on controlled use. Walmart, for example, has developed its own internal AI assistant and monitors the use of external tools on company devices.
  • One tricky problem is that almost half of office workers would not tell their bosses about their AI use, for fear of being seen as lazy or risking their jobs. The unclear legal situation also makes it difficult for companies to develop long-term AI strategies, as many questions about intellectual property and data protection remain unanswered.
Max is managing editor at THE DECODER. As a trained philosopher, he deals with consciousness, AI, and the question of whether machines can really think or just pretend to.