From USS Tennessee to AI Security: A Cybersecurity Journey

From USS Tennessee ISSM to AI security: how traditional cybersecurity expertise became both foundation and limitation for securing AI systems.

Learning

When I arrived on the USS Tennessee in 2018, I was stepping into a world of dated but stable technology: Windows 7 workstations, 5-inch floppy disks still in operational use, tape backups, and legacy network infrastructure that had been running reliably for years.

By the time I left, we had completed a comprehensive technology refresh: modern Windows operating systems, updated network enterprise, contemporary storage solutions, improved security posture. It was a textbook example of traditional IT security done right.

Here’s what surprised me most: the average end user barely noticed.

Sure, they had new computers. The interface looked different. But their day-to-day workflows? Largely unchanged. The difference between Windows 7 and Windows 10 from an operational perspective was incremental—same Office applications, similar file management, familiar processes.

This is how traditional IT has always worked: predictable update cycles, backward compatibility as a design principle, minimal user training because changes are gradual, straightforward security (scan-patch-scan).

Then I started working with AI systems.

Everything changed.

The Traditional Security Foundation

My cybersecurity career has been rooted in traditional InfoSec principles:

Information Systems Security Manager (ISSM) responsibilities:

  • Risk assessment and management
  • Security policy development and enforcement
  • Compliance monitoring (NIST, DoD STIGs)
  • Access control and least privilege implementation
  • Incident response planning and execution

CISSP perspective:

  • Security architecture and engineering
  • Asset security and data protection
  • Identity and access management
  • Security operations and monitoring
  • Software development security

Systems administration focus:

  • Log monitoring and analysis
  • Access control enforcement
  • Configuration management
  • Patch management
  • Backup and recovery

This foundation gave me a strong understanding of security frameworks, compliance requirements, and operational security. These skills remain valuable.

But they proved insufficient for AI security.

The Speed Problem: Evolution vs Revolution

The USS Tennessee technology refresh exemplified traditional IT evolution: predictable, manageable, incremental.

Traditional IT timeline:

  • Windows 7 (2009) → Windows 11 (2021): 12 years
  • Office 2010 → Office 2024: 14 years
  • Core workflows remained stable throughout
  • Users adapted gradually
  • Security controls evolved predictably

AI model timeline:

  • GPT-3 (2020) → GPT-5 (2024): 4 years
  • But practical capability jumps happened in months
  • Claude Skills.md, plugins, agent marketplaces: didn’t exist 6 months ago
  • Users must completely relearn workflows regularly
  • Security controls lag behind capabilities

It’s like watching the evolution from the first iPhone to the iPhone 16 happen overnight. Traditional security frameworks can’t keep pace.

Why I Started Working with AI

As a security professional, I realized I was facing a completely new threat landscape where my traditional expertise provided foundation but not solutions:

Traditional security frameworks didn’t apply:

  • No CVE database for prompt injection
  • No “patch” for architectural vulnerabilities
  • No established audit frameworks
  • No certification paths (CISSP doesn’t cover this)
  • No incident response playbooks

The field was evolving faster than education:

  • Academic programs lagged 2-3 years behind reality
  • Vendor training focused on using AI, not securing it
  • Security conferences had few AI-specific tracks
  • Most guidance was theoretical, not practical

I needed hands-on understanding:

  • You can’t secure what you don’t understand
  • Reading documentation isn’t sufficient
  • Theory without practice is incomplete
  • Breaking systems teaches more than protecting them

So I started building AI systems myself—not to deploy them, but to understand their vulnerabilities from the inside.

The Evolution into DevOps

Understanding AI security required skills I didn’t have as an ISSM/CISSP:

Infrastructure skills:

  • Docker containerization and orchestration
  • CI/CD pipeline design and security
  • Network configuration and monitoring
  • Secrets management at scale
  • Infrastructure as code (Terraform, Ansible)

Development workflows:

  • Git version control and branching strategies
  • GitHub Actions for automated testing
  • Code review processes
  • Dependency management
  • Application security testing

Why these matter for AI security:

  • Can’t assess AI deployment security without understanding containers
  • Can’t evaluate CI/CD risks without building pipelines yourself
  • Can’t design network security without hands-on configuration experience
  • Can’t audit agent systems without understanding how they interact with development workflows

Traditional security roles often separate “security” from “engineering.” This separation doesn’t work for AI security. You need both.

From Policy to Practice: The Gap

My background in policy roles taught me to think about risk, compliance, and governance. This remains valuable. But AI security showed me that policy without technical understanding is incomplete and often wrong.

Example: Prompt Injection Policy

Policy approach:

Policy 47.2: AI agents must not output credentials or sensitive data under any circumstances.

Enforcement: Security team will review AI outputs quarterly for compliance.

Reality:

  • Prompt injection can bypass any policy statement
  • Quarterly reviews are far too infrequent
  • Without output filtering, sandboxing, and monitoring, this policy is unenforceable
  • The policy writer clearly doesn’t understand how AI systems work

Technical approach:

# Implement actual controls
def query_ai_with_security(prompt, context):
    # 1. Input validation
    if contains_injection_patterns(prompt):
        log_security_event("Potential injection attempt")
        return sanitized_error_response()

    # 2. Output filtering
    response = ai_model.query(prompt, context)
    filtered_response = redact_sensitive_data(response)

    # 3. Sandboxing
    if response_attempts_code_execution(response):
        block_and_alert()

    # 4. Monitoring
    log_interaction(prompt, filtered_response, security_flags)

    return filtered_response

Policy informed by technical reality is effective. Policy written without understanding implementation is performative.

What’s Different About AI Security

From my ISSM/CISSP perspective, AI represents completely new territory that challenges fundamental assumptions:

Traditional IT SecurityAI Security
Scan-patch-scan methodology worksNo scanning for prompt injection; some vulnerabilities may be unsolvable
Update cycles measured in yearsModels change overnight; capabilities evolve monthly
Configuration files are deterministic (same input = same output)Natural language configuration is probabilistic (same input ≠ same output)
Vulnerabilities eventually get patchesSome architectural vulnerabilities may never be resolved
End users barely notice system upgradesUpgrades can break entire workflows; require organizational retraining
Vendor changes require months to adaptVendor changes happen instantly; no adaptation time
Security through stabilitySecurity through agility

This requires a fundamentally different security mindset—one that accepts uncertainty, designs for containment rather than prevention, and values rapid adaptability over long-term stability.

The Security Professional’s Advantage

Coming at AI from a security and systems administration perspective—rather than as a software developer—provides a distinct advantage:

Security professionals understand:

  • Defense in depth (no single control is sufficient)
  • Assume breach (design for compromise, not just prevention)
  • Least privilege (minimize access by default)
  • Monitoring and detection (you can’t prevent everything)
  • Incident response (how to contain damage when attacks succeed)

Systems administrators understand:

  • How to build resilient architectures
  • How to monitor for anomalous behavior
  • How to implement access controls at scale
  • How to maintain operational security under pressure
  • How to balance security with usability

These perspectives are critical for AI security because:

  • AI systems require security designed from the start, not added later
  • Prompt injection is unsolvable, so containment is mandatory
  • Model behavior changes require continuous monitoring
  • New vulnerability classes emerge constantly
  • Organizations must respond rapidly to vendor changes

Developers often optimize for functionality first, security second. Security professionals trained to think adversarially bring essential skepticism and risk awareness.

Lessons for Traditional Security Professionals

If you’re an ISSM, CISSP, CISM, or other traditional security professional looking at AI:

1. Your experience provides foundation, not full preparation

Traditional security knowledge is valuable—defense in depth, least privilege, monitoring, incident response all still apply. But they’re insufficient. AI requires new patterns built on traditional principles.

2. Hands-on experience is mandatory

You cannot effectively secure AI systems by reading documentation and writing policies. You must:

  • Build AI systems yourself (even small ones)
  • Try to break them through prompt injection
  • Understand how models actually work, not just conceptually
  • Experiment with different configurations and observe outcomes
  • Experience firsthand why traditional controls fail

3. Accept that you’re starting over in many ways

Your CISSP covered 8 domains thoroughly. None of them specifically address:

  • Prompt injection and jailbreaking
  • Model poisoning and backdoors
  • Adversarial machine learning
  • AI agent authentication
  • Probabilistic configuration management
  • Vendor model dependencies

You’re learning a new specialty, not just extending existing knowledge.

4. Agility becomes more important than stability

Traditional security values stability: locked-down systems, change control, predictable environments. AI security requires opposite mindset: rapid pivoting, continuous adaptation, designing for vendor switching.

This feels uncomfortable. Do it anyway.

5. Collaborate across disciplines

AI security requires security professionals, developers, data scientists, and operations teams working together. The traditional siloed approach fails.

Learn enough about each discipline to communicate effectively. You don’t need to be an expert developer, but you need to understand development workflows well enough to identify security gaps.

Where I Am Now: Hands-On AI Security

My journey from USS Tennessee to AI security has been one of continuous learning and fundamental rethinking:

What I’ve built:

  • Distributed AI agent system on Raspberry Pi cluster
  • Multi-model architecture (Claude, GPT-4, GLM)
  • Security monitoring for bias detection
  • CI/CD pipeline with security testing
  • Comprehensive logging and audit systems

What I’ve learned:

  • Traditional frameworks provide foundation but need significant adaptation
  • Hands-on experimentation teaches more than theoretical study
  • Policy without technical implementation is theater
  • Agility is a security control for AI systems
  • Vendor independence requires intentional architecture
  • Some vulnerabilities may never be solved; design for containment

What surprised me:

  • How quickly AI capabilities evolve (faster than I anticipated)
  • How subtle bias can be (requires statistical analysis to detect)
  • How SDKs can elevate lower-tier models (abstraction matters)
  • How unprepared most organizations are (including traditionally strong security programs)
  • How little formal training exists (everyone is figuring this out together)

The Path Forward

For security professionals making this transition:

1. Start building immediately

Don’t wait until you “fully understand.” Build something small. Break it. Learn from failure.

2. Focus on fundamentals, not specific models

Models change constantly. Understanding architectural patterns, threat vectors, and security controls matters more than knowing specific model capabilities.

3. Share what you learn

The field is so new that every practitioner’s experience adds value. Write about what works (and what doesn’t). Present at conferences. Help build community knowledge.

4. Embrace uncertainty

Traditional security often seeks definitive answers: “Is this secure?” AI security requires comfort with probabilistic thinking: “What’s the risk level? What’s our containment strategy?”

5. Stay current but don’t chase every trend

AI evolves weekly. You can’t learn everything. Focus on foundational concepts and practical implementation. Let others chase hype.

Conclusion: A New Discipline

The journey from USS Tennessee—securing stable, well-understood traditional IT systems—to AI security has been one of profound paradigm shifts.

Traditional security principles remain valuable:

  • Defense in depth
  • Least privilege
  • Monitoring and detection
  • Assume breach
  • Incident response

But they must be applied in fundamentally new ways to systems that:

  • Are probabilistic, not deterministic
  • Evolve monthly, not yearly
  • Have unsolvable vulnerabilities
  • Require vendor independence
  • Demand organizational agility

AI security isn’t traditional security plus AI. It’s a new discipline requiring new mindsets, new skills, and new frameworks built on traditional foundations.

For security professionals willing to embrace hands-on learning, accept uncertainty, and think differently about core assumptions, it’s an exciting frontier.

The field needs experienced security professionals who can bring adversarial thinking, risk assessment, and operational discipline to AI systems. But those professionals need to be willing to start learning again—not from zero, but from a new starting point where their traditional expertise is foundation, not solution.

That’s the journey I’m on. And if you’re reading this as a traditional security professional wondering if you should learn AI security: yes, you should. The field needs you. But come ready to be challenged, surprised, and forced to rethink fundamentals.

That’s what makes it interesting.