AI Will Not Take Your Job, But It Might Take Your Ability to Think
The real AI risks are not about capability. They are critical thinking erosion, organizational inertia, and a misunderstanding of both AI and security.
Everyone is having the wrong conversation about AI.
Scroll through any tech forum, LinkedIn feed, or Discord server and you will find the same debate: which model is better? Is GPT-5.4 smarter than Opus 4.6? Did the latest benchmark prove anything? Who won the coding eval?
The models have gotten better. That matters. But the tooling wrapped around the models is what actually changed what I can build day to day. Harnesses like Claude Code and Codex CLI give AI the ability to read files, run commands, recover from errors, and iterate on real projects. Better models combined with better harnesses are what made AI tools genuinely useful for infrastructure and security work. The models alone, accessed through a chatbot interface, were never enough.
But the capability question is not what keeps me up at night. What concerns me is how humans and organizations are responding to these tools — and how badly most people misunderstand both AI and the security risks they claim to worry about.
This is an opinion piece, based on my experience as someone with a cybersecurity and systems administration background — not a software developer — who uses AI coding tools daily to build, test, and secure infrastructure. I am going to make arguments about history, about risk, about organizational failure, and about how most people misunderstand both AI and cybersecurity. You may disagree with parts of it. I hope you do, because that means you are thinking about it.
TL;DR
- Yes, AI will displace specific categories of jobs. That is real and happening now. But the bigger long-term risk is the erosion of critical thinking and problem-solving skills, especially among junior workers and students who never get the chance to develop them.
- People are already forming emotional attachments to AI chatbots. That erodes the critical distance these tools require and makes users less likely to question bad outputs.
- Intelligence without the ability to apply it is historically useless. AI follows the same pattern as agriculture, writing, the printing press, and the internet: each expanded access to knowledge while reducing the pressure to develop the skills to use it.
- Large organizations will repeat the dot-com era pattern of failing to adapt, not because they are stupid, but because their structure makes agility impossible.
- “AI is too insecure to use” reveals a misunderstanding of cybersecurity, not of AI. Cybersecurity is risk management, and policies are only as useful as your ability to enforce them.
- The advancement people attribute to smarter models is largely driven by better tooling. AI coding harnesses like Claude Code and Codex CLI matter more than model benchmarks for real-world capability.
Where I Am Coming From: Harnesses, Not Benchmarks
Before I get into the risks, I need to explain why I think AI capability is accelerating faster than most people realize. That context matters for the arguments that follow.
I use AI coding tools — Claude Code, Codex CLI, and others — to build and test infrastructure. I used to run Docker Swarm clusters for security research. I no longer do, because AI tools now let me spin up, test, and tear down Docker networks in minutes. Swarm’s overhead slowed that cycle down. When the goal is rapid testing and experimentation, simpler is faster.
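To make that cycle concrete, here is a minimal sketch of a throwaway test environment using the Docker SDK for Python. The image and names are placeholders; in practice the harness writes and runs something like this for me, then cleans up after itself.

```python
import docker

client = docker.from_env()

# Create an isolated bridge network for one experiment
net = client.networks.create("scratch-test-net", driver="bridge")

# Launch a disposable container attached to that network
target = client.containers.run(
    "nginx:alpine",            # placeholder image for the test target
    name="scratch-target",
    network="scratch-test-net",
    detach=True,
)

# ... run whatever probes or tests you need against the container here ...

# Tear everything down so the next experiment starts clean
target.remove(force=True)
net.remove()
```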
That shift happened not because the models got dramatically smarter overnight, but because the harnesses around them matured. Claude Code and Codex CLI are harnesses. They are not the AI model. They are the orchestration layer that gives the model file system access, the ability to run commands, context management across long sessions, and the ability to try something, see the result, and adjust. The model provides the reasoning. The harness provides the ability to act on that reasoning.
Think of it like this: someone with a PhD in network engineering knows everything about routing and firewalls. But drop them in a datacenter with no console access, no documentation, and no tools, and their knowledge is useless. Give them SSH access and a network diagram, and suddenly they can redesign your infrastructure. The PhD is the model. The SSH access is the harness. The PhD did not get smarter when you gave them tools. They became able to apply their intelligence.
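If you prefer the mechanism in code rather than analogy, the heart of a harness is a small loop: the model proposes an action, the harness executes it, and the result goes back into the model's context. This is a deliberately stripped-down sketch, not how Claude Code or Codex CLI are actually built, and ask_model is a placeholder for whatever model API a harness calls.

```python
import subprocess

def ask_model(history: list[str]) -> str:
    """Placeholder for the call to the underlying model.
    Returns a shell command to run, or 'DONE' when finished."""
    raise NotImplementedError  # assumption: plug in your model client here

def harness_loop(task: str, max_steps: int = 10) -> None:
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        action = ask_model(history)  # the model reasons about the next step
        if action.strip() == "DONE":
            break
        # The harness acts on that reasoning: run the command, capture output
        result = subprocess.run(action, shell=True, capture_output=True, text=True)
        # Feed the outcome back so the model can see errors and adjust
        history.append(f"$ {action}\n{result.stdout}{result.stderr}")
```

Everything the real harnesses are credited with here, like context management, tool integration, and session persistence, is layered around that basic loop.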
The AI is not the agent. The harness is the agent. In my workflow, Claude Code is the stronger harness right now. Its context management, tool integration, and session persistence are more mature. Codex CLI is catching up and excels at targeted problem-solving. I use them together: Claude Code with Opus 4.6 for planning and architecture, Codex CLI with GPT-5.4 for targeted fixes. Both harnesses can now delegate subtasks to smaller, cheaper models as subagents, which reduces token waste and keeps the main agent’s context clean.
I dropped Gemini CLI due to persistent API errors — reliability matters more than raw capability. I also switched from ChatGPT to Raycast AI for chat, because Raycast gives me multi-model API access with custom system prompts, essentially letting me build specialized agents for specific tasks without writing code. Raycast is now available on Windows too.
For the technical details, see Building a Multi-Model AI System and How to Run Claude Code, Codex, and Gemini as Containerized Homelab Services.
That is the context. The tools are real, they work, and they are improving fast. Now let me talk about what actually worries me.
Intelligence Without Application Is Useless: A Historical Argument
I do not believe the talking heads who say AI is going to take over and nobody will have a job. Not because AI is not powerful — it is. But because intelligence, on its own, has never been the thing that mattered.
Someone with a PhD has extraordinary knowledge in their field. But a PhD without the ability to apply that knowledge — without practical skills, without problem-solving ability, without the judgment to know when and how to use what they know — is useless in the real world. The degree is proof of intelligence. The career is proof of the ability to apply it. These are not the same thing, and throughout human history, it has always been the application that created value.
The Pattern
Every major knowledge-access revolution follows the same pattern. A new technology expands who can access knowledge. A small percentage of people use that access to build, innovate, and solve problems. The rest benefit passively. And in every case, the increased access to knowledge also reduced the pressure to develop the skills needed to apply it.
Agriculture. Before agriculture, every human had to be a problem-solver. You hunted, foraged, built shelter, navigated terrain, and adapted to threats daily. Survival required constant application of knowledge. Agriculture changed that. By enabling stable food production, it freed people from subsistence and created the conditions for specialization — writing, engineering, medicine, governance. But it also meant that most people no longer needed to develop the full range of survival skills. The level of effort required for a comfortable life dropped.
Writing. Before writing, knowledge existed only in the minds of the people who held it. Oral traditions could transmit knowledge, but they were fragile — one generation’s failure to pass it on meant it was lost. Writing made knowledge persistent and transferable. It enabled law, science, and complex governance. But it also meant that fewer people needed to develop the memory, storytelling, and oral reasoning skills that pre-literate societies depended on.
The printing press. Before the printing press, written knowledge was scarce and controlled. Books were hand-copied, expensive, and locked away in monasteries and universities. The printing press democratized access to written knowledge. It fueled the Reformation, the Scientific Revolution, and the Enlightenment. It also meant that the skills required to access knowledge — the ability to travel to a library, to read Latin, to gain entry to an institution — became less important. More people could access knowledge. Fewer people needed to work hard to get it.
The internet. The internet made virtually all human knowledge instantly accessible to anyone with a connection. It enabled self-taught engineers, remote education, open-source collaboration, and the democratization of expertise. It also created the conditions for passive consumption, information overload, and the atrophy of deep research skills. When the answer to any question is a search away, the ability to reason through a problem from first principles becomes a skill that fewer people practice.
The Double Edge
Here is the part that matters for the AI conversation: every one of these revolutions both enabled more people to reach their potential and enabled more people to never reach it.
Each technology lowered the floor — the minimum effort required for a decent life. That is genuinely good. Fewer people starving, more people educated, more people with access to opportunity. But it also meant that the pressure to develop the skills that drive innovation — problem-solving, critical thinking, the ability to struggle through hard problems — decreased for the majority.
The small percentage of people who could both access and apply the new knowledge drove disproportionate innovation each time. The farmers who became engineers. The literate monks who became scientists. The internet-connected students who became open-source contributors. The value came from application, not access.
AI Follows the Same Pattern
AI is the next step in this progression. It makes the application of knowledge more accessible than ever before. You do not need to know Docker networking to deploy a container stack — an AI tool can generate it for you. You do not need to memorize NIST frameworks to write a security policy — an AI tool can draft one. You do not need years of programming experience to build a functional application — an AI tool can write the code.
That is powerful. It means more people can build, deploy, and create than ever before.
But it also means more people can get results without developing the underlying skills. And that is where the real risk lives.
In short: Intelligence has never been the bottleneck. The ability to apply it has. Every knowledge-access revolution expanded who could access knowledge while reducing the pressure to develop application skills. AI follows the same pattern — and the consequences of that pattern are what we should be worried about.
The Real Risks: Job Loss, Critical Thinking Erosion, and Emotional Dependency
The title of this piece says AI will not take your job. Let me be honest about where that claim holds and where it does not. Specific categories of knowledge work — tasks that are repetitive, pattern-based, and well-defined — are already being automated or reduced. That displacement is real and happening now. But “your job” in the title is not about those tasks. It is about the ability to think, adapt, and solve problems. AI cannot replace that. The danger is that we stop developing it.
What I want to focus on are the longer-term risks that I think are more dangerous precisely because they are less obvious.
The Critical Thinking Pipeline
Junior and entry-level positions are not just about getting work done. They are where humans develop the skills that make them valuable throughout their careers.
Think about what actually happens in a junior role. You get a task you do not fully understand. You struggle with it. You break things. You learn to read error messages, trace problems through systems, and ask the right questions. You develop communication skills by explaining what you did and why. You build judgment by making mistakes and seeing the consequences.
Everything leading up to and including your time in a junior position is really about developing two capabilities: critical thinking and soft skills like communication, collaboration, and judgment. The technical content of the job — the specific programming language, the particular framework, the vendor-specific tooling — is really just the medium through which those capabilities are developed. It is the gym equipment, not the fitness.
Here is the problem: AI is automating exactly those junior-level tasks. If companies replace junior developers with AI tools, or reduce the number of junior positions because AI makes senior developers more productive, the pipeline that produces experienced practitioners dries up. You cannot develop problem-solving skills by watching an AI solve problems. You develop them by struggling through problems yourself.
This is compounded by the increasing use of AI in schools during kids’ most formative years. When students can get an AI to write their essay, solve their math problem, or debug their code, they are not just skipping homework. They are skipping the cognitive struggle that builds the neural pathways for independent thought.
I am not making a Luddite argument. AI tools in education can be powerful when used correctly — as tutors, as thinking partners, as tools that help students work through problems rather than bypass them. But that requires intentional pedagogy, and what I am seeing instead is students using AI as a shortcut that removes the need to think at all.
The result, over a generation, could be a significant shortage of people with the ability to problem-solve and think critically. Not because people are less intelligent, but because they never had to develop the skills.
Emotional Dependency
There is another risk that sounds minor but is not. People are already building emotional attachments to AI chatbots.
This goes beyond finding a chatbot useful. People are developing trust in and emotional connections with systems that look and feel as if they are thinking and talking — but are not. They are language models producing statistically likely responses.
We have already seen what happens when this goes wrong. When Replika removed its romantic companion features in early 2023, users reported grief, anxiety, and a sense of loss comparable to a breakup — over a chatbot. Character.AI has faced lawsuits after teenagers developed intense emotional bonds with AI personas. Parents alleged their children became isolated from real relationships. These are not edge cases. They are early signals of a pattern that will scale as the interfaces get more convincing.
When the chatbot works well, users feel understood. When it malfunctions, produces incorrect information, or changes its behavior after an update, users feel betrayal — as if a trusted partner let them down. That emotional response erodes exactly the critical distance these tools require. People who trust an AI system emotionally are less likely to question its outputs. They are more likely to accept incorrect information. They are more likely to make decisions based on AI recommendations without independent verification.
In short: Job displacement is the obvious risk. The deeper risks are the erosion of the critical thinking pipeline (junior roles disappearing, students bypassing cognitive struggle) and emotional dependency on systems that simulate understanding but do not have it. People who trust AI emotionally stop questioning its outputs, which is the opposite of how these tools should be used.
Organizational Inertia and the Dot-Com Parallel
The dot-com era taught us something important about how organizations respond to technological change, and we are about to learn the same lesson again.
Companies Are Like People
When a company is young, it has nothing to lose. No legacy systems. No compliance frameworks. No customers depending on stability. No shareholders demanding predictable returns. Young companies take risks because risk is the only path to growth when you have nothing.
As a company succeeds, it accumulates responsibilities. Employees who depend on steady paychecks. Customers who depend on stable products. Contracts with SLAs and penalties. Regulatory obligations. Board members who want consistent quarterly performance.
Each of these responsibilities is an anchor against change. Not because change is bad, but because change introduces uncertainty, and uncertainty threatens everything the organization has built. This is the same pattern you see in individuals: when you are 22 with no mortgage and no kids, you will take a job at a startup. When you are 42 with a family and a retirement plan, you take the stable corporate position. It is rational risk management.
The problem is that technological change does not care about your risk profile.
Steering a Ship
A large organization trying to adopt AI is like an aircraft carrier trying to turn. The captain sees the iceberg. The navigation team plots the new course. But the ship has so much momentum in its current direction that the turn takes miles. A speedboat sees the same iceberg and turns in seconds.
This is not a metaphor about intelligence. The people running large organizations are often brilliant. They understand AI. They see the potential. They read the same articles and attend the same conferences. The constraint is structural, not intellectual.
Changing a large organization’s AI strategy means:
- Retraining thousands of employees
- Renegotiating vendor contracts
- Updating compliance and security frameworks
- Modifying hiring practices and role definitions
- Rebuilding CI/CD pipelines and development workflows
- Managing the politics of teams whose work is being automated
Each of these is a multi-month initiative. Together, they take years. Meanwhile, a five-person startup with AI coding tools is shipping features in days that used to take weeks.
The Attribution Error
There is a subtler problem that compounds organizational inertia. The leaders of successful organizations tend to attribute their success to their specific knowledge and the processes they built. “We got here because of our expertise in X and our disciplined approach to Y.”
But in most cases, what actually got them there was their problem-solving ability and their willingness to find and apply new knowledge at the time when they were building the organization. The knowledge and processes were outputs of that ability, not the cause of success.
When AI arrives and threatens to make those specific processes obsolete, the observable response is predictable: leaders double down on what they know. They invest more in the existing stack. They tighten the existing processes. They frame AI adoption as a risk to be managed rather than a capability to be deployed. You can watch this happen in real time at any large enterprise — the committees, the pilot programs that never scale, the security reviews that become indefinite holding patterns.
The same problem-solving ability that made them successful the first time is exactly what they need to apply now. But the processes and incentive structures they built around their success actively prevent them from doing it.
The Prediction
Companies that fail to adopt AI tooling will not fail dramatically. They will not go bankrupt overnight. They will slowly become less competitive — and the gap will widen faster than they expect.
A team of five engineers using AI coding harnesses can ship features, write tests, and iterate on infrastructure at a pace that used to require twenty. That is not a marginal improvement. It is a structural cost and speed advantage that compounds over time. The company using AI tools is not just faster today. It is learning faster, iterating faster, and accumulating competitive advantage with every sprint. The company that banned AI tools to “reduce risk” is paying five times the labor cost for the same output — and losing its best engineers to organizations that let them work with better tools.
This is not hypothetical. It is the same dynamic that played out in the dot-com era. The companies that failed to adopt internet technology did not explode. They faded. Blockbuster did not close all its stores in one day. It just became incrementally less relevant until it was gone. The difference now is that the timeline is compressed. The internet transition played out over a decade. AI tooling is moving faster because the tools themselves accelerate their own adoption — teams that use AI to build AI integrations create a feedback loop that non-adopters cannot match.
In short: Large organizations face structural barriers to AI adoption — not intellectual ones. The same risk aversion that made them stable makes them slow. The leaders who built those organizations often mistake their specific knowledge for the problem-solving ability that actually drove their success. The result will be a slow fade, not a dramatic collapse.
The Misunderstanding Problem: Why People Get AI Wrong
More and more often, I overhear people dismissing AI entirely — insisting it is useless, that they are better than it, that they refuse to let it think for them.
The instinct is right. The conclusion is wrong. And it comes from a fundamental misunderstanding of what AI is and how it should be used.
The Chatbot Problem
The majority of people have been exposed to AI through one lens: chatbots. ChatGPT, Gemini’s web interface, Copilot in Bing. These are conversational interfaces that generate text responses to text prompts.
Chatbots are the least powerful application of modern AI. They are impressive as a technology demonstration, but they are limited by design. They cannot read your files. They cannot run your code. They cannot interact with your infrastructure. They have no persistent memory of your project. They lose context after a certain number of messages. They are, fundamentally, a very sophisticated autocomplete engine wrapped in a chat interface.
When someone says “AI sucks” because ChatGPT gave them a wrong answer, or wrote a bad essay, or could not solve their specific coding problem — they are making a reasonable judgment based on their experience. The problem is that their experience represents maybe 10% of what AI tooling can actually do.
The Right Instinct, Wrong Conclusion
The people who say “I don’t let AI think for me” have exactly the right instinct. They should not let AI think for them. AI is a tool, not a replacement for judgment.
But the conclusion they draw — “therefore I should not use AI” — is wrong. You do not refuse to use a calculator because you do not let it think for you. You do not refuse to use a search engine because you do not let it research for you. You use these tools to augment your own capability while maintaining your own judgment about the results.
The gap between “AI as chatbot” and “AI as development tool” is enormous. Claude Code reading your entire codebase, running tests, iterating on solutions, and deploying infrastructure is a fundamentally different experience from asking ChatGPT to explain a concept. Most people have never experienced the former. They are making sweeping judgments about AI based on the weakest implementation of the technology.
Not Their Fault
I want to be clear: this is not a criticism of the people who hold these views. The media and marketing around AI focuses almost entirely on the chatbot experience. The viral demos are chatbot demos. The mainstream coverage is about chatbots writing essays and passing bar exams. The AI safety discourse is about chatbot alignment.
Very few people outside the developer community have seen what happens when you give a capable model access to a real codebase through a well-designed harness. Very few people have experienced the difference between “ask AI a question” and “work alongside AI on a complex project.”
Until that gap closes, the misunderstanding will persist. And as long as it persists, people will either over-trust chatbots (leading to the emotional dependency problem) or under-utilize AI tools (leaving real capability on the table).
In short: Most people judge AI based on chatbot interactions, which are the weakest application of the technology. Their instinct to maintain independent judgment is correct. Their conclusion that AI is therefore useless is wrong. The gap between chatbots and AI development tools is enormous, and most people have only experienced one side of it.
Security Is Risk Management, Not a Reason to Avoid AI
I also hear the argument that AI is too insecure to be used. This one hits close to home because I have spent my career in cybersecurity, and I believe this argument reveals a fundamental misunderstanding — not of AI, but of what cybersecurity actually is.
There Is No Such Thing as “Cybersecurity”
That is a deliberately provocative statement, so let me explain what I mean.
What most people call “cybersecurity” is actually risk management. There is no such thing as a perfectly secure system. Every technology, every process, every human introduces risk. The goal of cybersecurity is not to eliminate risk — that is impossible. The goal is to identify risks, assess their likelihood and impact, and manage them to an acceptable level given your organization’s mission and resources.
When someone says “AI is too insecure to use,” they are implicitly claiming that the risks of AI cannot be managed. That is almost never true. What is true is that the risks are different from traditional IT risks and require different management approaches — which I have written about in AI Security Challenges We Are Not Ready For and Ten Lessons from Building an AI Agent Security Lab.
The CVE Hype Problem
Part of the “AI is insecure” perception comes from hype around security vulnerabilities and CVEs. Every time an AI-related vulnerability is published, it generates breathless coverage about how AI is dangerous and cannot be trusted.
But CVEs exist for every technology. Linux has CVEs. Windows has CVEs. Your database has CVEs. Your web framework has CVEs. The existence of vulnerabilities is not an argument against using a technology. It is the baseline condition of all technology. The question is whether you can manage the vulnerabilities, patch them, mitigate them, and maintain an acceptable risk posture. The same discipline that secures your web server secures your AI deployment.
Policies Are Only as Useful as Enforcement
There is a corollary principle that applies equally to AI security and to cybersecurity in general: policies, instructions, and SOPs are only as useful as your ability to enforce them. Without enforcement, they are worthless.
This is not an abstract point. Let me give you a concrete example that every cybersecurity professional knows.
The Password Policy Example
For years, the most common password policy in enterprise environments followed this pattern:
- Minimum 14 characters
- Must include uppercase, lowercase, special characters, and numbers
- Must be changed every 60 to 90 days
- Cannot reuse previous passwords
- Enforced through Active Directory or equivalent
In theory, this created strong passwords that were difficult to crack. Complex character requirements expanded the keyspace. Regular rotation limited the window of exposure if a password was compromised. Reuse prevention ensured old compromised passwords could not be recycled.
In practice, it made passwords easier to crack.
Why? Because users will always find a way to satisfy the policy with the minimum cognitive effort. Complex passwords that change every 90 days and cannot be reused are impossible to remember. So users developed patterns:
- Keyboard walks: qwerty!@#$%^ satisfies every complexity requirement
- Geographic patterns: Dallas2026Spring! — location, year, season, special character
- Sports teams: Cowboys#2026! — favorite team, symbol, year
- Sequential mutations: Password1! becomes Password2! becomes Password3!
Password cracking tools know these patterns. Hashcat rule sets specifically target keyboard walks, geographic patterns, and seasonal mutations. A password that technically meets every complexity requirement can be cracked faster than a simpler passphrase because the patterns are so predictable.
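To see how small the effective keyspace gets, here is a rough back-of-the-envelope sketch in Python. The word lists are illustrative stand-ins, not what real cracking rule sets use, but the combinatorics make the point:

```python
from itertools import product

# Illustrative inputs -- real cracking word lists are far larger,
# but the combinatorics stay trivially small either way
cities  = ["Dallas", "Austin", "Houston", "Chicago", "Denver"]
years   = [str(y) for y in range(2020, 2027)]
seasons = ["Spring", "Summer", "Fall", "Winter"]
symbols = ["!", "#", "$"]

candidates = [
    f"{city}{year}{season}{symbol}"
    for city, year, season, symbol in product(cities, years, seasons, symbols)
]

print(len(candidates))  # 420 guesses covers every combination above
# A GPU cracking rig tests billions of guesses per second; a policy-
# compliant password drawn from a pattern like this falls instantly.
```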
This is why NIST updated their guidance in Special Publication 800-63B. The revision recommended:
- Longer passphrases over complex character requirements
- Removing mandatory periodic rotation
- Screening passwords against known compromised password lists
- Eliminating composition rules (uppercase, special character, etc.)
The old policy was more restrictive. The new policy produces better security outcomes. Because security is not about the strength of the rule. It is about whether the rule produces the behavior you need.
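A check built on the revised guidance is also simpler to implement. Here is a minimal sketch of what SP 800-63B-style validation could look like; the 15-character minimum is my own illustrative threshold, and the breached-password file is a stand-in for whatever compromised-credential feed you actually use.

```python
def load_breached_passwords(path: str) -> set[str]:
    """One breached password per line, e.g. an export from a
    compromised-credential service (placeholder file name)."""
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f}

def is_acceptable(password: str, breached: set[str]) -> tuple[bool, str]:
    # Length over composition: favor long passphrases
    if len(password) < 15:  # illustrative threshold, not a NIST number
        return False, "Use a longer passphrase (15+ characters)."
    # Screen against known-compromised passwords
    if password in breached:
        return False, "That password appears in a known breach."
    # No composition rules, no forced rotation, per the revised guidance
    return True, "OK"

breached = load_breached_passwords("breached_passwords.txt")
print(is_acceptable("correct horse battery staple is long", breached))
```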
The AI Parallel
The same logic applies to AI policy, and the consequences of getting it wrong are more dangerous than most security teams realize.
This is not a new problem. It is the shadow IT problem that every security team has been fighting for over a decade, replaying with higher stakes.
When organizations were slow to adopt cloud services, employees signed up for Dropbox, Google Drive, and personal AWS accounts to get their work done. When IT locked down communication tools, teams spun up unauthorized Slack workspaces and WhatsApp groups. When procurement took six months to approve a SaaS vendor, departments put it on a corporate card and figured out compliance later. Every time, the organization’s security posture got worse. Not because the employees were malicious, but because the approved path was too slow or too restrictive to let them do their jobs.
AI is following the exact same pattern, except the data exposure risk is orders of magnitude higher. A rogue Dropbox account might leak a few files. A developer pasting proprietary source code into an unvetted AI chatbot is potentially feeding your intellectual property into a training pipeline you do not control.
Employees who need AI tools to be productive will use them whether you approve it or not. They will use personal accounts, shadow IT instances, and unauthorized APIs. They will paste proprietary code into free-tier chatbots with no data retention guarantees. They will route sensitive queries through services whose terms of service explicitly allow training on user inputs. Every one of these shadow AI interactions is harder to monitor, audit, and control than an officially sanctioned deployment with proper data handling policies.
And here is the part that should alarm every CISO: the barrier to capable AI is collapsing. Chinese models like GLM-4 are free, capable, and improving rapidly. Open-weight models distilled from frontier systems like Opus and GPT-5 are already closing the gap. DeepSeek demonstrated that distillation from larger models can produce surprisingly capable smaller models at a fraction of the cost. From what I can see, the economics of AI access are trending toward near-zero, and that trend is accelerating.
That means your employees will not just have access to one or two shadow AI tools. They will have access to dozens — free, capable models running locally or through unmonitored APIs, with no logging, no data loss prevention, and no visibility for your security team. Banning the sanctioned tools does not reduce AI usage. It eliminates your ability to see it.
An overly restrictive AI policy is the password policy of 2026. It looks secure on paper. It produces worse security outcomes in practice because it ignores human behavior and the reality of what is freely available.
The better path is to enable AI use rather than fight it. A sanctioned AI deployment with proper guardrails — data classification policies, approved model lists, audit logging, and clear acceptable use guidelines — gives you two things a ban never will: visibility into how AI is being used and control over what data flows through it. You can enforce DLP rules on an approved tool. You cannot enforce anything on a developer’s personal ChatGPT account.
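The visibility point is not abstract, either. Once usage is sanctioned, even a few lines against your egress proxy logs give you a baseline of who is talking to which AI endpoints. This is a sketch under an assumed log format and an illustrative domain list, not a vetted inventory or a DLP control by itself:

```python
import csv
from collections import Counter

# Illustrative list of AI API endpoints -- maintain your own inventory
AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def summarize_ai_traffic(proxy_log_csv: str) -> Counter:
    """Assumes a CSV export with 'user' and 'dest_host' columns
    (placeholder schema -- adapt to your proxy's actual format)."""
    hits: Counter = Counter()
    with open(proxy_log_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row.get("dest_host") in AI_DOMAINS:
                hits[row["user"]] += 1
    return hits

# Per-user request counts to known AI endpoints: a starting point
# for an AI usage baseline, not a complete control.
print(summarize_ai_traffic("proxy_export.csv").most_common(10))
```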
This is also where the security argument and the competitiveness argument converge. As I discussed in the organizational inertia section, companies that do not adopt AI tooling are already falling behind competitors who do. A security team that bans AI tools is not just creating shadow IT risk — it is actively handicapping the organization’s ability to compete. The security team exists to help the organization operate safely, not to stop it from operating at all. An AI policy that makes the organization less competitive while pushing usage underground is failing on both counts.
The correct approach is the same approach NIST took with passwords: understand how people actually use the technology, design controls that work with human behavior rather than against it, and focus on monitoring and enforcement rather than blanket prohibition.
For a deeper dive into how traditional security frameworks struggle with AI-specific risks, see Why AI Security Broke Traditional InfoSec Playbooks.
In short: Cybersecurity is risk management, not risk elimination. Banning AI tools does not stop usage — it pushes it into shadow IT that is invisible to your security team, just like every previous wave of shadow IT. Free, capable models distilled from frontier systems are making this worse every month. Enabling AI with proper guardrails is both more secure and more competitive than prohibition. The NIST password guidance shift is the precedent: better security comes from working with human behavior, not against it.
Conclusion
The real risks of AI are not about AI being too smart or too capable. They are about humans.
Junior roles disappearing before people develop problem-solving skills. Students outsourcing thinking during their most formative years. People forming emotional bonds with systems that simulate understanding, eroding the critical distance that safe AI use requires. Organizations frozen by structural inertia. People misjudging the technology based on limited exposure to chatbots. Security teams treating risk management as prohibition.
These are human problems, not technology problems. And they are compounded by the fact that AI capability — driven by better harnesses, not just better models — is accelerating faster than most people expect. The gap between what AI tools can do and what humans are prepared to handle is widening.
The people who will navigate this well are the ones who:
- Use AI as a tool while maintaining their own judgment and problem-solving skills
- Recognize that their value comes from their ability to think and adapt, not from their specific knowledge
- Understand that cybersecurity is risk management, not risk avoidance
- Stay curious enough to understand what these tools actually are, rather than dismissing them based on chatbot experiences
The organizations that will navigate this well are the ones that can stay agile enough to adopt new tooling without abandoning the problem-solving culture that made them successful in the first place.
I do not have all the answers. But I have been watching this space closely, building with these tools daily, and thinking about the implications. These are the patterns I see.
What is your experience? Is your organization adapting, or stuck in the ship-turning problem? Are you seeing junior roles disappear in your field? I want to hear from people who are in the trenches with this, not just reading about it.