Here’s what keeps coming up for me about AI: we’re calling a lot of different things by the same name.

There are the tools themselves. ChatGPT, Claude, Gemini, Copilot. Image generators like Midjourney, DALL-E, Stable Diffusion. Voice cloning, video synthesis, coding assistants. A dozen categories of software, all lumped under one umbrella term.

But there’s another layer that I think matters just as much: the perspectives people bring when they talk about AI. How you feel about these tools depends a lot on where you’re standing. Whether you see opportunity or threat, convenience or risk, the future of work or the end of it. We’re not just using different tools; we’re coming at this from completely different places. And I think that’s shaping adoption, trust, and the conversations we’re having about what AI should and shouldn’t do.

So I want to try and break down both: the many things we’re calling AI, and the different lenses through which people see them. This list isn’t comprehensive, and I expect my thinking will keep shifting as this whole landscape does.

The Many Faces of AI

When someone says “AI” to me now, the first thing I’m thinking is: which one?

There’s search-replacement AI, whether that’s ChatGPT, Claude, Gemini, or the AI summaries that now appear at the top of Google results. A lot of people use these the same way they used to use search engines: ask a question, get an answer. The convenience is undeniable. But so is the confidence problem. These tools can be wrong, cite sources that don’t say what they claim, or just make things up entirely. They deliver answers with the same tone whether they’re right or completely off. That’s a strange new dynamic to navigate.

There’s boardroom buzzword AI, the one that executives drop into quarterly reports. It’s a checkbox. A signal that the company is “innovative.” Sometimes there’s real substance behind it, and sometimes it’s more about optics than implementation. I’ve seen both.

There’s coding AI, helping developers write software faster and helping people with no programming background build things they couldn’t have built before. “Vibe coding” is what some people call it: describing what you want and letting the AI figure out the implementation. It’s genuinely cool to see people bring ideas to life that would have stayed stuck in their heads otherwise. But it comes with responsibility: we have to stay mindful of where any unreviewed AI-generated code ends up running in production.

There’s job-fear AI. The version that has people wondering whether their job will exist in five years. Writing, coding, design, analysis, customer service. The automation question isn’t new, but AI has expanded the scope of what might be automated. I’ve worked in and around creative industries for most of my career, and I watch friends and former colleagues grapple with this one. When your livelihood feels threatened, you don’t want nuance. You want clarity. The people who are cautious about these tools aren’t wrong to be. They’re protecting something real. And honestly? AI might be coming for my work next. I don’t think anyone is immune to that question right now.

There’s slop-generating AI, pumping out endless content that nobody asked for. Low-quality articles, spam images, fake reviews. The internet already had a noise problem, and some of these tools are making it worse. Then there’s the content people think is real but isn’t. AI-generated entertainment is one thing. But fake videos, fabricated quotes, and voice clones designed to manipulate? That’s something else entirely.

There’s plagiarism and copyright AI, trained on other people’s work without clear permission or compensation. Artists, writers, and creators have legitimate frustrations here. The legal and ethical questions are far from settled, and I don’t think we should pretend they are.

There’s environmental-concern AI. These systems can consume significant amounts of power, and depending on the facility and location, water for cooling. That’s not fear-mongering; the resource demands are real. But it’s also more nuanced than some headlines suggest. Some data centers are moving to zero-water cooling systems, while others in water-stressed regions are drawing millions of gallons daily. Whether the tradeoffs are worth it probably depends on what’s being accomplished with that compute.

There’s creepy surveillance AI, feeding facial recognition systems, risk-scoring algorithms, behavioral tracking, and data harvesting that gets packaged and sold. These tools power monitoring, manipulation, and control, often without the knowledge of the people being watched, scored, or profiled. Privacy erosion at scale, and once it’s gone, it doesn’t come back easily.

There’s medical and research AI, the kind that helps scientists analyze data, discover drugs, identify diseases earlier, and push forward in ways that would take humans decades longer. When AI helps catch a cancer earlier or accelerates vaccine development, that’s the version of this technology I want to see more of.

There’s security defense AI, and this one hits close to home for me. These tools can help security teams detect threats faster, triage alerts that would otherwise pile up, and give smaller teams capabilities that used to require much larger headcounts. It’s not a silver bullet, but it can genuinely make a difference for people trying to protect systems and data.

There’s cybercrime AI, the kind that helps bad actors craft more convincing phishing emails, find vulnerabilities faster, and automate attacks that used to take more time and, in some cases, more expertise. Working in cybersecurity, I see this one up close. The same capabilities that help defenders also help attackers, and that arms race is accelerating.

And then there’s the one I actually use every day: practical workflow AI. The tools that help me get more done, think through problems, and extend what I’m capable of.

And all of this is to say nothing of the machine learning that’s been quietly running in the background for years: Siri and Alexa, autocorrect, spam filters, photo face recognition, recommendation algorithms, fraud detection, GPS route optimization, robovacuums. We’ve been living with “AI” for a long time without the cultural weight the term carries now. What I’m talking about here is really the current moment, the strange dichotomy the word has taken on in our society.

Where I’ve Landed (For Now)

I’m using AI to improve my workflows, increase what I can accomplish, and do more for my clients. It’s not an abstract concept for me anymore. It’s integrated into how I work.

In practice, that looks like a few different things. I’ve always been big on communication: presentations, videos, reports for clients. That work takes time, and I used to do less of it because each piece was a production. Now I can take the data and ideas I’m working with and turn them into polished deliverables much faster. Three or four times as much output in the same window.

It helps me manage large implementation projects for clients, tracking the dozens of moving pieces that come with any serious technical engagement. It helps me build tools, including ones I can send to a 3D printer. It helps me build and manage the self-hosted services I run in my own lab. It connects into my calendar, my task lists, my documents, and helps me stay on top of a growing number of coding projects and automations.

And it’s let me tiptoe into areas I’m still new to. I’ve started experimenting with custom hardware, building small devices that integrate with my automation systems. I’m not a hardware hacker by background, but having something that can help me bridge the gap between what I know and what I’m trying to learn has made that kind of experimentation feel possible.

The way I use it keeps shifting. A year ago it was mostly standalone conversations. Now it’s more embedded, connected to the systems I already use, doing real work alongside me.

And even within the tools themselves, I’m not locked into one. Right now I’m using Claude Code with a lot of customization layered on top. But over the past few years I’ve moved between Claude in the browser, ChatGPT, local LLMs, various image generators, different voice tools. My approach has been to build my workflows so they can sit on top of whichever tool is giving me the best results at any given time. None of this is static, and neither is my setup.
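That tool-agnostic setup boils down to a simple pattern: workflows talk to one thin interface, and the backend behind it can be swapped out. Here’s a minimal sketch of the idea; the names (`Assistant`, `ask`, `switch`) are hypothetical illustrations, not any real library’s API, and the stand-in backends are just placeholders where real model calls would go.

```python
from dataclasses import dataclass
from typing import Callable

# A backend is just "prompt in, text out". Workflows depend only on
# this signature, so swapping tools means changing one registration,
# not rewriting the workflow itself.
Backend = Callable[[str], str]

@dataclass
class Assistant:
    backends: dict[str, Backend]
    active: str

    def ask(self, prompt: str) -> str:
        # Route every request through whichever backend is currently active.
        return self.backends[self.active](prompt)

    def switch(self, name: str) -> None:
        # Move all workflows to a different tool in one step.
        if name not in self.backends:
            raise KeyError(f"no backend registered as {name!r}")
        self.active = name

# Stand-in backends; in practice these would call an API or a local model.
assistant = Assistant(
    backends={
        "tool_a": lambda p: f"[tool A] {p}",
        "tool_b": lambda p: f"[tool B] {p}",
    },
    active="tool_a",
)
print(assistant.ask("summarize this report"))  # served by tool A
assistant.switch("tool_b")
print(assistant.ask("summarize this report"))  # now served by tool B
```

The point of the pattern is that nothing upstream cares which tool is behind the interface, which is what makes "use whichever one is giving the best results this month" practical.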

And here’s the weird part: when I’m actually using it, I stop thinking about it as “AI.” It’s just a thing that can do work. I hand tasks to something that understands what I’m trying to accomplish. It becomes another way to build processes and applications, kind of independent of all the hype and fear around the term itself.

Daniel Miessler, whose work I’ve been following closely for a while, puts it this way:

“The main practical theme of what I look to do with a system like this is to augment myself. Like, massively, with insane capabilities. Think Tony Stark stuff, no joke. Minus the flying. It’s about doing the things that you wish you could do that you never could do before, like having a team of 1,000 or 10,000 people working for you on your own personal and business goals.”

That really resonates with me. Not because I think AI is magic or harmless, but because I’ve experienced what it feels like to have capabilities I didn’t have before. To build things I couldn’t have built alone. To work on problems that would have taken me weeks.

The Uncertainty

AI and all of its related technology are reshaping the economy and the nature of work, and likely will continue to. We’re going to see huge changes over the coming years in what these agents are capable of, the automations they enable, and the way they integrate into our devices and daily work.

A lot depends on how smart these systems get, how fast they get there, and what we decide to give them access to. And by “them” I mean all of them, because there isn’t just one AI making these decisions. There are thousands of systems, developed by different companies, with different goals, being deployed in different contexts. The future isn’t one story. It’s a thousand stories playing out at once.

I’m more hopeful than anything, honestly. It’s just a matter of how things shake out, and how well (or how badly) the big tech companies, governments, and all of us handle what’s being built.

And who knows? We may not even have access to these tools in the same way a year from now. The costs, availability, and capabilities could shift completely. But for the time being, I’m really enjoying using what’s available to do better work.