89% of the Workforce Uses AI, But How Many Know the Risks?

By: Georgia | Published: Jan 08, 2024

In November 2022, OpenAI released ChatGPT, marking a significant moment in technological advancement. Security company Kolide’s CEO promptly encouraged the team to explore this transformative technology.

This initiative was mirrored globally, as employers encouraged their staff to harness generative AI to boost productivity and innovation. In less than a year, the practice became remarkably widespread across professional environments.

AI Adoption Skyrockets Among Professionals

A comprehensive survey by Kolide revealed that an astonishing 89% of knowledge workers incorporate some form of AI in their monthly routines.

A digital art illustration featuring large, glossy letters "AI" in the center, surrounded by an intricate network of swirling lines and geometric patterns on a shaded purple background

Source: Steve Johnson/Unsplash

This rate of adoption is striking when compared to the gradual acceptance of technologies like email. Despite the enthusiasm for AI, there appears to be a gap in understanding its associated risks among these users.

The Overlooked Risks of AI in Professional Settings

The survey brings to light that while AI’s adoption is rapid, comprehension of its risks lags behind. It’s not about apocalyptic scenarios, but rather about legal, reputational, and financial implications.

A robotic hand with articulated fingers is poised against a backdrop of a complex digital network represented by interconnected white lines and dots on a deep blue background

Source: Tara Winstead/Pexels

Forrester’s 2024 AI Predictions Report indicates a potential rise in “shadow AI,” leading to significant regulatory, privacy, and security challenges within organizations.

AI Errors and the Phenomenon of ‘Hallucinations’

Kolide reports that AI tools, particularly large language models (LLMs), are prone to errors, often generating or relying on incorrect information.

Over-the-shoulder view of a man using a laptop displaying the ChatGPT interface on the screen, with his fingers poised over the keyboard, ready to type

Source: Matheus Bertelli/Pexels

These inaccuracies are sometimes referred to as “hallucinations.” Examples of such errors include a lawyer citing nonexistent case law, and a chatbot advising harmful actions. Forrester anticipates the advent of “AI hallucination insurance” as a response to these significant risks.

The Debate Over AI and Plagiarism

Kolide also highlights that generative AI, by its nature, cannot produce entirely original content, raising concerns about plagiarism and copyright infringement.

Close-up of a vintage green typewriter with a sheet of paper inserted that reads "COPYRIGHT CLAIM" in bold, capitalized letters

Source: Markus Winkler/Unsplash

The legal community is actively engaged in discussions to determine the boundaries of AI in this context. This debate is crucial for understanding the implications of AI-generated content in creative and legal domains.

Security Concerns with AI-Generated Code

AI’s integration into coding and software development has raised security concerns. Kolide calls attention to an alarming trend: malware disguised as AI tools, especially in browser extensions.

A woman's face is superimposed with vibrant digital graphics and lines of code, creating a double exposure effect that merges human features with symbols of technology and data

Source: ThisIsEngineering/Pexels

Companies are also wary of AI tools inadvertently collecting sensitive data, raising questions about trade secret security and the potential for malicious exploitation.


Discrepancy in AI Usage and Workplace Policies

Kolide’s Shadow IT Report highlights a discrepancy between the rate of AI usage by employees and the extent to which companies are aware or have policies in place.

A scene in an office with multiple workers focused on their computer screens, illuminated by the soft glow of desk lamps and overhead hanging bulbs

Source: Israel Andrade/Unsplash

This gap indicates a lack of oversight and potential for AI-generated content to be integrated into workplaces without proper scrutiny or governance.


Varied Company Policies on AI Usage

The survey found a notable disparity between the percentage of companies that permit AI usage and the actual extent of its use by employees.

A group of four professionals are seated around a white meeting table in a well-lit, modern office space. They are focused on their laptops and notes during a collaborative session, with colorful sticky notes on the wall in the background

Source: Jason Goodman/Unsplash

This discrepancy showcases the need for more coherent and comprehensive policies governing AI use in professional settings, considering both its potential and risks.


Inadequate Training on AI Risks in the Workplace

The survey also uncovers a significant gap in employee education on AI risks, with only 56% of companies providing relevant training.

Four men in a casual office setting with exposed brick walls and large windows are engaged in a project meeting. One man stands presenting at a whiteboard filled with colorful sticky notes and workflow stages, while the others, seated on a couch and office chairs, look on with laptops, actively participating in the discussion

Source: Austin Distel/Unsplash

To ensure the safe and informed use of AI technology in the workplace, Kolide recommends improved and ongoing training programs.


Employees’ Underestimation of Colleagues' AI Usage

A striking finding from the survey is that employees tend to underestimate how much their colleagues use AI.

Over-the-shoulder view of a person typing on a gaming laptop with a screen that reads "Introducing ChatGPT," detailing the capabilities of the AI model. The laptop displays a colorful graphic and text about ChatGPT

Source: Viralyft/Unsplash

Despite widespread use, most employees perceive their own AI use as unusual, which may undermine collective awareness of, and policy adherence around, AI applications in professional settings.


The Need for AI Acceptable Use Policies

Given the extensive integration of AI in professional contexts, Kolide argues that the development of clear and effective AI usage policies has become essential.

A simple yet evocative silhouette of a human head in profile against a white background, filled with numerous eyes of varying sizes, representing the concept of artificial intelligence

Source: Tara Winstead/Pexels

These policies should aim to provide visibility into AI use within organizations, prevent unsafe practices, and establish guidelines for acceptable use, ensuring a balanced and safe approach to AI utilization.


Kolide's Proactive Approach to AI Policies

Kolide has implemented a comprehensive policy for AI use in the workplace.

A robotic arm with a black exterior and articulated fingers is reaching out to touch the index finger of a human hand. The human hand is visible up to the forearm and has several tattoos

Source: cottonbro studio/Pexels

Their approach includes understanding how employees use AI, blocking risky applications, and establishing a framework for acceptable AI use.