
Tech buzzwords are so snazzy, aren’t they?

Tech buzzwords are so snazzy! Move fast and break things, machine learning, internet of things, big data, blockchain, pivot to video! All of them are evocative and creative, and all of them have led me to day drinking! As an old, I’ve seen a lot of these fads come and go. Some are great and some are devastating.
The latest newcomer to this effervescent list is AI First!
It means that with every task an employee performs, they should first ask whether AI can do it instead.
It’s no surprise there’s a lot of doom and gloom over AI. Some of it is warranted. Very warranted. The entry-level jobs as they exist today, along with remedial work, will probably vanish, and that’s bad for young people. The point of AI First is to handle the jobs that an intern or junior employee would do. But catastrophic job-loss predictions have come and gone, and by and large we’ve adapted. There will be new entry-level jobs. Will they be harder? Maybe. Will they require new and different skills? Definitely. Whatever it is, it will be disruptive. Thankfully, in America, we have high-minded and thoughtful politicians who are future focused!
[Long sigh]
My prediction
It will likely be disruptive and painful in the near term, but eventually a new normal will be achieved. Technology, like water, seeks its own level. My advice is to brace for impact. Life offers no guarantees. Government intervention may seem alluring, but with it come tradeoffs and unintended consequences. Besides, I’m not at all convinced our leaders are up to the task, much less willing to address the issue.
Plainly, I think AI is overrated. Like any tool, it’s great at some things and terrible at others. I think we can gain wisdom from the hoodie. When approaching a problem, Bill Belichick didn’t have a rigid method. He relied on a dynamic system of adaptability and fluidity. The gist is knowing your strengths and weaknesses: lean into strengths and augment weaknesses. Rigid systems can be strong, but they’re brittle. This is why that era of the Pats preferred smarter players.
AI’s Strengths
- Efficiency: automation of repetitive tasks
- Rapid iteration and testing
- Gathering and synthesizing general information
- Image generation and enhancement
- Research around subjects with encyclopedic databases
- Guessing based on very little information
- Pattern recognition
Weaknesses
- Predicting the future
- Gathering and synthesizing advanced information
- Problem solving, reasoning, and logic
- Bias
- Efficiency
- Creativity
- Understanding context
- Chesterton’s fence
Nice try, AI
What is it good for?
AI is a generalist. It’s really good at automating repetitive and remedial tasks. It’s also pretty good at basic research (more on this later). It can gather information and synthesize it for fast consumption. For basic advice on, say, planting, starting a workout routine, or coding a short script, it’s great.
I use AI in my conceptual photography. Below is a recent shoot with a model covered in ink. There was no way for me to have the model in a vat of ink on location, so I filled in the rest with AI in post. I plan on shooting a cupid concept next month where I’ll use AI to add wings in post. I’m not going to make a model walk around Boston with massive angel wings, and I’m not going to spend the money on lifelike ones for a single shoot either. I also use it for concept refinement and presentation to models.


It’s also incredible at knowledge retrieval, drawing on data sets far larger than a human being could possibly remember. Scientists and researchers are successfully using AI to create new compounds, materials, and pharmaceuticals. I’ve seen software accurately assess charts and news and provide solid financial advice. AI is perfect for running millions of simulations to test and explore solutions, from new materials to medicines. That’s great, and we should have more of it.
So what’s the problem?
Humans are really bad at predicting the future. Just scroll through a young man’s DraftKings betting history, or mine. Until we figure out how to predict the future, we’re not going to make a machine that can do it. We also don’t understand consciousness. Like cavemen with fire, we can manipulate consciousness (especially with fun drugs), but we don’t understand it. We can’t even define it. AI won’t spit out the next big business idea, nor will it even produce good results on how to become a millionaire. It (probably) won’t become sentient and kill us all.
Garbage in and garbage out
The bias is baked into the cake. I said AI is good at basic research, but it’s not good at advanced research. It’s got the garbage in, garbage out problem. And hoo boy, does the internet have a ton of garbage. Basic information for beginners is usually pretty standard for most subjects, but users seeking advanced knowledge will find it lacking. By some measures, 60% of its answers are wrong. It likely won’t give you correct information on advanced horticulture or accurate training and diet plans, and it won’t write complex code bases. That’s because advanced subjects are highly debated and often changing. That makes AI inefficient, because users will inevitably have to go back and fact-check results. In fact, ChatGPT has a disclaimer to that effect.
Which brings me to the biggest weakness: problem solving. Understanding context and applying creativity are aspects of problem solving. Humans are great at it because we are creative. As with consciousness, we don’t fully understand creativity. It’s difficult to define, but we know it when we see it. Even humans have difficulty repeating creative magic. Until we understand it, we won’t be able to build machines that solve problems creatively.
To further the point, Feng Zhu demonstrates what AI cannot do in the world of concept art. It can render beautiful images, but it won’t solve issues like movement, space, and functionality. My full-time job is ̶y̶e̶l̶l̶i̶n̶g̶ ̶a̶t̶ ̶u̶n̶s̶u̶s̶p̶e̶c̶t̶i̶n̶g̶ ̶s̶t̶r̶a̶n̶g̶e̶r̶s̶ UX design. I haven’t used much of it for UX specifically. My product manager has used AI to flesh out wireframes to get an idea across, but she’s already done the problem solving in her brain.
Most software is too context dependent for a generalist to design. When designing banking software, I couldn’t possibly have Figma’s AI prototype a flow for a trade blotter that could enter FX trades, execute the trade through another system, send the trade for approval, and then record the history. There’s just too much proprietary software that isn’t readily available to an AI. You could train it on a specific company’s software to recognize patterns and replicate them, but that’s very limiting and requires a lot of upfront work. There are also industry quirks, like the way currencies are paired, and regulatory or legal factors that shape the UIs. And even if you managed to teach it all of those things, it won’t create anything new. I would have spent more time editing the interface than it would have taken to do it myself. The juice just ain’t worth the squeeze. Simply put, a human has to come up with the idea and carry it through.
So it won’t design a complicated interface from top to bottom, but remember Belichick’s system: lean into your strengths and augment your weaknesses. Sometimes my work is rushed and hastily done. I often have spelling mistakes or consistency errors. I could use AI to check my work by scanning screens for inconsistent design patterns. I’ve had roles where I’ve worn project and product management hats and CX hats. Those tasks can be ripe for AI: scanning support tickets to identify common problems and bugs, making sense of feature requests, analyzing Jira tickets to t-shirt size feature requests, compliance testing, scheduling meetings, and so on. Here’s a brilliant example of a UX designer augmenting their strengths by applying the Belichick method to accessibility.
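The support-ticket idea doesn’t even need a fancy model to get started. Here’s a minimal sketch in Python; the sample tickets, stopword list, and `common_themes` helper are all hypothetical, not from any real support system:

```python
import re
from collections import Counter

# Hypothetical tickets -- in practice these would come from a support
# system export (Zendesk, Jira Service Management, etc.).
tickets = [
    "Login page times out after password reset",
    "Cannot reset password, reset email never arrives",
    "FX trade approval button greyed out on the blotter",
    "Trade blotter freezes when approving an FX trade",
    "Password reset link expired immediately",
]

# Words too generic to signal a theme (illustrative, not exhaustive).
STOPWORDS = {"the", "a", "an", "on", "when", "after", "cannot", "never"}

def common_themes(tickets, top_n=3):
    """Count non-stopword terms across tickets to surface recurring themes."""
    words = []
    for t in tickets:
        words += [w for w in re.findall(r"[a-z]+", t.lower()) if w not in STOPWORDS]
    return Counter(words).most_common(top_n)

print(common_themes(tickets))
```

Even this crude word count surfaces “reset” and “password” as the loudest complaints; an LLM layered on top would do the same thing with synonyms and phrasing variations handled for you.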
What I am good at is idea generation and strong intuition, especially when it comes to legacy software. As I’ve mentioned, I currently use AI to refine my photography concepts, but there’s nothing stopping me from using it to help generate ideas for smaller, more targeted components of a design. Intuition is not an AI skill, but LLMs can take words and make sense of them by recognizing patterns. Meaning, I (or others) can’t always articulate a reason for something, but an LLM could take our thoughts, feelings, and concerns and clarify them. Zoom has a really nice AI tool that summarizes meetings. Why not use it during user testing and reread the transcript analysis to help put thoughts, actions, or nonverbal cues into words? That’s powerful.
As technology seeks its own level, new skills and jobs are required for those technologies and the periphery around them. UX design would have been a useless profession before the computer, which means Figma couldn’t exist. No one had heard of an app before the App Store (we called them proggies). What I mean is that technology is not zero sum. Creative destruction occurs and is painful, but ultimately it’s additive. I’m willing to guess someone wrote a ‘they’re takin our jerbs’ essay about some sorta tech shortly before the App Store was invented. And boom, here we are nearly 20 years later. You don’t know what hasn’t been invented yet.
At a panel discussion I attended, a CTO, who will go nameless, used an example of a manager who wanted to refactor a code base. The manager told him it would take months and a team. The CTO said to throw it into Microsoft Copilot! The manager did, and lo and behold, it took a 45k-line code base and refactored it down to 20k over a weekend. That’s neat! Also terrifying.
During the Q&A portion, a thoughtful engineer stood up and pushed back. He urged caution: AI is good at cleaning up small code blocks, but it’s not good at engineering systems. Systems require context, reasoning, and logic. He was met with a mealy-mouthed response that ended in the panel laughing it off.
But he was right. Of course you can’t throw a huge code base into a black-box AI and let ’er rip. That’s insane. Systems are the result of thoughtful problem solving that takes into consideration a whole host of issues: legacy code, regulations, APIs, third-party microservices, and meta issues around an organization such as resources. Systems, and the people who created them, have ingrained knowledge and context. Replicating that with a machine is impossible because new solutions beget new challenges. That’s why Bill Belichick’s system was so successful.
Using AI in that manner creates a Chesterton’s fence problem. It lacks checks and balances, and it can’t self-correct or creatively solve issues that arise. It moves fast and breaks things, and we’re left picking up the pieces. I’m willing to bet that weekend refactoring escapade got rid of some important code that handled nuanced, context-dependent design. When your car breaks, you wouldn’t take it to a chimp and just YOLO-pull out wires. And even if that chimp were successful, under the blurst of conditions, it would still require testing and problem solving around inevitably unforeseen problems. Because it sucks at predicting the future.
The blockbuster Apple research paper released this week argues that LLMs cannot reason. They’re just fancy pattern-recognition machines. That tracks with what I have been arguing here. They struggle with logic problems and give up quickly. I suspect that’s because humans still cannot recreate the human brain. An LLM couldn’t even solve the Tower of Hanoi problem by cheating (ironically enough) and reverse engineering one of the many solution code bases on the internet.
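For context on why that failure is so striking: Tower of Hanoi has a textbook three-line recursion that any intro CS student writes. A minimal Python sketch of the standard solution (peg names are arbitrary):

```python
def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Return the list of (from, to) moves that transfers n disks."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, spare, target, moves)  # park n-1 disks on the spare peg
    moves.append((source, target))              # move the largest disk to the target
    hanoi(n - 1, spare, target, source, moves)  # restack the n-1 disks on top
    return moves

print(len(hanoi(3)))  # a 3-disk tower takes 2**3 - 1 = 7 moves
```

The solution is mechanical and endlessly reproduced online, which is exactly the point: a system that can only pattern-match should have aced it, and its struggles suggest something other than reasoning is going on.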
AI is a tool. If every problem were a nail, the hammer would be king. But it ain’t, and that’s why AI will never be king. Think of it as an exoskeleton or mech suit: it will make us faster and stronger. The future with AI, and its inevitability, begins with Belichick’s adaptive system. Find your weaknesses and use AI to augment them. Find your strengths and apply AI to make them stronger.
While I do think AI will replace the work that interns and young professionals currently do, I don’t think it will replace them. Prompting is a skill and one that will become necessary as AI becomes more ubiquitous. We will need people to actually use the AI and refine the results.
It won’t be without challenges. Not everyone is good at verbal communication. Good prompters and communicators will get better results, while the rest will spend their time editing those results, the way some people can use hammers better than others. There’s already a ton of AI slop out there.
I don’t want to gloss over the real problems AI causes. A lot of kids are cheating in school. Both the Atlantic and NY Mag have covered it. As new technologies stay in the zeitgeist, we manage and adapt. More and more teachers are returning to blue books and in person test taking or essay writing.
I’d take a pragmatic approach. Learn it, understand it and take advantage of it where possible. That’ll involve some creativity. AI prompting will be a required entry level skill.
As we grow with AI, we’ll start to see how it fits into the broader picture. I wouldn’t act like the caveman who’s afraid of fire, but I wouldn’t act like the one who can’t wait to get burned by it either. My prediction is that AI will get better (really going out on a limb here). It will continue to flatten and reduce barriers to entry in all sorts of fields, for good and for ill. It’s like a washing machine from 100 years ago. Today’s machines are incomparable, orders of magnitude better. But at the end of the day, a washing machine can only clean clothes. I still have to do the rest.
P.S.
I wrote this post a few weeks ago because I like to take my time when editing. Since then, Apple released a large study finding that LLMs cannot ‘think’. I added a paragraph to address this revelation. For a further breakdown, read Gary Marcus.
AI-first? was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.