Insight: Using AI at Work—Three Professionals Caught Between Expectations and Policies
* * *
It’s 2025, and the pressure to “leverage AI for productivity gains” is everywhere. In boardrooms, performance reviews, and quarterly calls, the message is clear: work smarter, move faster, do more with less. But what happens when the tools to achieve these goals aren’t officially available? Here are three stories from the front lines of corporate AI adoption.
Story 1: Maya, Software Developer
“I’m basically doing two jobs now”
Maya joined the engineering team at a mid-size fintech company eighteen months ago. Since then, two senior developers have left, and management decided not to backfill the positions. “We’re being more strategic about headcount,” her manager explained during their last one-on-one.
The math is simple: Maya now maintains legacy code that three people used to handle, while also contributing to new feature development. Her sprint velocity is tracked, her pull requests are monitored, and everyone keeps mentioning how “AI should make developers 10x more productive.”
The company doesn’t have approved AI coding tools yet. IT is “evaluating options and working on governance frameworks.” But Maya’s deadlines haven’t changed.
So she does what most of her peers do: opens her personal ChatGPT account and pastes in code snippets. “I’m not uploading entire files,” she rationalizes. “Just functions I’m debugging. Just API calls I need to optimize.”
Last week, she uploaded a database schema to help troubleshoot a performance issue. The AI suggested an elegant solution that saved her four hours of work. She made her deadline, her manager was happy, and the customer issue got resolved.
Maya knows the code contains proprietary business logic. She also knows her performance review is coming up, and she can’t afford to miss targets. “Everyone’s doing it,” she says. “And honestly, I don’t think I could keep up without it.”
Story 2: James, DevOps Engineer
“The infrastructure won’t manage itself”
James runs infrastructure for a healthcare software company. His team of four manages cloud environments for applications serving 50,000+ medical professionals. When one person took a new job last month, the CEO asked if they “really needed to replace them right away.”
The answer, apparently, was no. James now handles deployment pipelines, monitoring systems, security patches, and incident response with three people instead of four. Leadership keeps sharing articles about “AI-driven DevOps” and asking when they’ll see those efficiency gains in practice.
The company is in the middle of a lengthy AI tool evaluation process. Legal is reviewing terms of service. Security is assessing data handling policies. Procurement is negotiating enterprise contracts. The timeline is “Q2 or Q3, maybe.”
Meanwhile, James is troubleshooting a complex networking issue that’s affecting application performance. He copies configuration files into his personal Claude account, asking for help analyzing routing tables and security group settings.
The AI spots a misconfiguration that would have taken James hours to find manually. The fix prevents what could have been a significant outage affecting patient care systems.
“I feel weird about it,” James admits. “But patients depend on these systems. If I can prevent downtime by getting help from AI, how is that not the right thing to do?”
The configuration files contain internal IP ranges, server names, and architectural details that reveal how the company’s entire infrastructure is organized. James doesn’t think much about it. He’s just trying to keep things running.
Story 3: Susan, Chief Financial Officer
“Everyone wants to see those productivity numbers”
Susan has been CFO for three years at a manufacturing company that makes specialized industrial equipment. The business is solid, but growth has plateaued. During quarterly board meetings, directors keep asking about AI initiatives and when they’ll start seeing productivity improvements in the numbers.
The CEO pulls her aside after the last board meeting. “Susan, we need to show progress here. McKinsey says companies are seeing 15–20% productivity gains from AI adoption. Why aren’t we?”
Susan knows the answer: their IT procurement process takes six months minimum for new software. Legal wants to review every AI tool’s terms of service. Security needs to assess data handling practices. HR is developing usage policies. The compliance team wants training programs.
But the pressure is real. Activist investors are asking tough questions. The stock price is flat. Employee satisfaction surveys show people feeling overworked and under-resourced.
So Susan starts using AI for her own work. Financial models that used to take her team days to build now get done in hours. She uploads revenue data, cost structures, and competitive analysis into ChatGPT, asking for help with scenario planning and strategic recommendations.
The AI helps her prepare better board presentations, identify cost optimization opportunities, and model the impact of potential acquisitions. She’s more productive, her insights are sharper, and the board is impressed with her strategic thinking.
What Susan doesn’t fully process is that she’s uploaded detailed financial information that reveals exactly how profitable different product lines are, which customers generate the most margin, and what the company’s cost structure looks like compared to competitors.
“I’m not uploading anything that would violate our disclosure policies,” she tells herself. “Just internal planning documents to help me think through scenarios.”
The irony isn’t lost on her: she’s using AI to boost the very productivity numbers that the board wants to see, but through a process that technically violates the information security policies she helped establish.
The Common Thread
Maya, James, and Susan are good employees at good companies, making rational decisions under pressure. They’re not reckless or malicious. They’re productive, conscientious people trying to meet expectations with the tools available to them.
Each of them would probably say they’re being careful. Maya only uploads “small code snippets.” James only shares “configuration details, nothing sensitive.” Susan only works with “internal planning documents.”
But step back and look at what’s actually flowing to external AI systems: proprietary algorithms, infrastructure architectures, detailed financial data, customer information, strategic plans, and competitive intelligence.
None of them intended to share confidential information. They’re just trying to do their jobs well in an environment where AI tools are expected but not provided.
The Bigger Picture
This is happening at thousands of companies across every industry. Not because employees are careless, but because the pace of AI adoption is faster than the pace of corporate policy development.
Companies want the productivity gains that AI promises. Employees want to meet the performance expectations set for them. But the gap between “we need to leverage AI” and “here are the approved AI tools” is being filled by personal accounts and individual initiative.
The situation isn’t sustainable, but it’s also not necessarily catastrophic. The corporate world has navigated similar transitions before — the early internet, cloud computing, mobile devices, social media. There’s always a period where usage outpaces governance.
Most companies will eventually provide approved AI tools for their employees. Most AI companies will continue improving their enterprise data handling practices. Most employees will gladly switch to official tools once they’re available.
The question is how long this transition period lasts, and what happens to all the data that flows through personal AI accounts in the meantime.
What’s Next?
Maya’s company just announced they’re piloting GitHub Copilot for the engineering team. James’s organization is evaluating enterprise AI platforms that keep data within their own infrastructure. Susan’s board approved budget for “AI productivity tools” in the next fiscal year.
The solutions are coming. The policies are being written. The governance frameworks are being developed.
But for now, thousands of employees are making individual judgments about how to balance productivity expectations with information security concerns. Most of them are probably making reasonable decisions. Some of them probably aren’t.
It’s a uniquely modern dilemma: trying to be responsible while also trying to be competitive, in a world where the technology moves faster than the bureaucracy designed to govern it.
Everyone’s figuring it out as they go.
Note: The names and specific details in these stories are fictional, but they’re based on real conversations with employees across multiple industries. The situations described reflect common patterns in how individuals and organizations are navigating AI adoption in 2025.
Aaron Rose is a software engineer and technology writer.