Ajey Pandey

Energy Futurist


A Knowledge Worker’s Guide to AI

Last Updated: 3 April, 2024 (v5)

A year ago, generative artificial intelligence was the sci-fi bogeyman that would take everyone’s jobs and then kill everyone. Now people are laughing at AI for not being sci-fi magic. But science fiction is not the right metaphor for thinking about ChatGPT, Claude, Gemini, Copilot, or any other AI tool for sale today.

Instead, think of generative AI as the next Microsoft Excel. It’s office productivity software. It will not take your job. But if you don’t train up in generative AI tools, someone who did train up might take your job. Or worse, you’ll get fooled by some huckster selling a scam with AI language.

How Does Generative AI Work?

Generative AI tools like ChatGPT, Claude, and Gemini are not sentient. They do not think, and they do not operate on human logic. They’re simply next-generation neural networks. I’ll keep the technical details brief, but if you want to use generative AI tools effectively, you need to know how they work.

Neural Networks, Explained

Neural networks saw their first applications in image recognition. As an example, let’s play a game called “Chihuahua or Muffin”: you get a grid of photos, each showing either a chihuahua’s face or a blueberry muffin, and you have to label each one.

If you’re a human, this is an easy task to do but a hard task to program. The obvious tells for distinguishing a chihuahua from a muffin—color, shape, position of dots—are unreliable, and the more nuanced features—texture, background, context—are hard to describe in the language of pixels and vectors.

With a neural network, you don’t bother with that nonsense. Instead, you code up a giant pachinko machine and collect a box of training data: inputs linked to outputs. In this case, the inputs are the photos, and the outputs are “DOG” or “MUFFIN.”

Then, you “train your neural network”—your AI—by shaking around your pachinko machine until its outputs match the outputs from your training data. Once trained, you can throw in a new photo of a chihuahua-or-muffin, and the pachinko machine will tell you if it’s a chihuahua or a muffin. Throw in a picture of a cat or an apple pie, and the pachinko machine will still call it a chihuahua or a muffin. Open the pachinko machine to see what it’s looking for in a photo, and you’ll get gibberish. That’s how neural networks function: if you’ve trained a neural network on chihuahuas and muffins, it will only understand chihuahuas and muffins, with increasingly bizarre outputs outside of that domain.
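To make the pachinko-machine analogy concrete, here is a toy sketch in Python (using scikit-learn, which is my choice of tool, not anything the AI vendors actually use). The feature names and numbers are invented for illustration; real image classifiers train on raw pixels, not three hand-picked values.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Fake training data: each "photo" is boiled down to three invented numbers
# (say: tan-ness, roundness, and number of dark spots).
chihuahuas = rng.normal(loc=[0.8, 0.4, 3.0], scale=0.1, size=(50, 3))
muffins = rng.normal(loc=[0.6, 0.9, 4.0], scale=0.1, size=(50, 3))
inputs = np.vstack([chihuahuas, muffins])
outputs = ["DOG"] * 50 + ["MUFFIN"] * 50

# "Shake the pachinko machine" until its answers match the training labels.
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(inputs, outputs)

# A new chihuahua-or-muffin photo gets a sensible answer...
print(model.predict([[0.81, 0.42, 3.1]]))  # most likely ['DOG']

# ...but a cat or an apple pie still gets forced into one of the two labels.
print(model.predict([[0.3, 0.3, 0.0]]))    # still 'DOG' or 'MUFFIN'

The last line is the whole point: anything outside the training domain still gets squeezed into “DOG” or “MUFFIN.”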

This is the technology that underpins YouTube recommendation algorithms, facial recognition software, and self-driving cars. It’s been around for ten years—ChatGPT is simply the next step. Now, instead of simply identifying patterns in data, the neural network can replicate those patterns in the form of human-sounding text.

The Actual Safety Issues of AI

Generative AI systems won’t take over the world, or lock humanity in a VR dystopia, or even replace real human relationships with AI girlfriends. But there are real safety concerns with generative AI—and they come from technological limitations and the necessity of training data.

Generative AI is Inherently Janky

Even beyond the normal issues of first-generation technologies, AI systems are built on probabilities, not certainty. A good-enough neural network is 95-97% accurate, and a high-effort neural network is 98-99.9% accurate. Put another way, even the best neural networks have error rates: a 99%-accurate system making 10,000 decisions a day still gets roughly 100 of them wrong.

For a recommendation algorithm, this is fine—it’s a bad song on a Spotify playlist, or a confusing post on your Instagram. For a self-driving vehicle, this is a car crash waiting to happen.

Generative AI chatbots fail by hallucinating—that’s the industry term. I’ve seen AIs flub basic facts, generate fake links, and produce outputs that have nothing to do with what I asked. A practiced hacker can even jailbreak a generative AI into bypassing safety or copyright restrictions.

Generative AI is Spying on You

Everything you type into an AI today becomes training data for the next version. Everything. ChatGPT has a “Team” plan that claims your inputs won’t be used this way, but I don’t trust them. My IT department said they won’t trust an AI with sensitive information until a software big dog like Microsoft or Oracle says it’s safe. Your IT department may be less paranoid.

Unless your IT department gives you the go-ahead, don’t input internal files, customer information, industry secrets, or anything else you wouldn’t post in public.

Picking an AI Tool

As of April 2024, we have serious competition in the generative AI chat market. In particular, there are three main contenders: ChatGPT, Claude, and Gemini.

(If you want a deeper breakdown, check out this guide from the business training firm Section: The Best AI Chatbots in 2023 (sectionschool.com))

The examples below for ChatGPT and Claude were generated on 16 November 2023 unless marked otherwise. The examples for Gemini were generated on 7 December 2023 after the introduction of an upgraded language model. The outputs are tagged by which AI model generated them.

ChatGPT: The Established Player

ChatGPT is…well, it’s ChatGPT. It’s run by a company called OpenAI, but it’s backed by Microsoft, who provide the bottomless funding to make it work. Microsoft’s “Copilot” tools are all based on the same AI model. The free version is fine, I guess, but it’s barebones and it gets throttled during peak times. You really want to pay $20/month for ChatGPT Plus, which gets you:

  • GPT-4, which is the only GPT model that matters

  • Built-in web-browsing, which reduces hallucinations

  • Data Analysis, which can do rudimentary data analysis and visualization with input files

  • DALL-E integration, which can generate janky-looking images based on vague prompts

  • Custom Instructions, which allows for tailored outputs

  • “GPTs,” which are customizable chatbots that can tailor outputs or add extra flexibility

  • Voice-to-voice interaction on the ChatGPT iOS app, which is pretty cool

There’s also a $25/month business plan called ChatGPT Team, which includes:

  • An admin console for managing access and security features

  • A claim that your usage data will be excluded from training—which your IT department should verify

Claude: The Word-Hungry Upstart

Claude is a newer player in the chatbot space. It’s run by a company called Anthropic, which recently got a four-BILLION-dollar investment from Amazon. It was released in March 2023 with a promise of more-human, less-harmful outputs. In practice, the outputs are similar to ChatGPT’s but with less computer-like prose. Claude will say “I” more often and will sometimes volunteer compliments.

Claude’s Version 3 is still a step behind ChatGPT in feature set. Like ChatGPT, it can do math and read images, but I’ve also seen Claude make some pretty serious factual errors. This is largely because Claude still can’t browse the live internet, which is ChatGPT’s number-one safeguard against factual errors.

The real party trick is the input size: ChatGPT can only take in 4,000-8,000 “tokens” of input, whereas Claude can take in 200,000, which comes out to roughly 150,000 words! Even the free version of Claude can take in files (including PDFs and CSVs) as inputs—something that ChatGPT blocks for its free version—although the free tier is limited to “only” 100,000 tokens of input. (If you want to check whether a document fits, see the token-count sketch below.)
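To get a feel for those limits, here is a rough sketch for estimating a document’s token count with OpenAI’s tiktoken library. The file name is a placeholder, and Claude uses its own tokenizer, so treat the result as an approximation rather than an exact fit check.

import tiktoken

encoder = tiktoken.get_encoding("cl100k_base")  # tokenizer family used by GPT-4-era models
with open("dpu_filing.txt", encoding="utf-8") as f:  # placeholder file name
    text = f.read()

token_count = len(encoder.encode(text))
print(f"~{token_count:,} tokens")
print("Fits an 8,000-token window?  ", token_count <= 8_000)
print("Fits a 200,000-token window? ", token_count <= 200_000)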

The $20/month Claude Pro plan gets you three major upgrades:

  • Unlimited inputs

  • The increased token count

  • Access to the higher-fidelity “Opus” model (as opposed to the standard “Sonnet” model)

It seems like Anthropic is focusing on the safety and hallucination resistance of the AI system instead of developing the overall feature set. Unless you really want the big input size, ChatGPT is the better tool for now.

Gemini: Free and Rapidly Improving

Gemini (formerly “Bard”) is Google’s in-house AI chatbot. It started as a half-baked rush job, but it has improved dramatically. Gemini’s “extensions” tie into other Google services like Maps, Docs, and YouTube, and it will frequently display Googled images and make tables when generating answers. It also has a built-in “fact check” button that runs outputs through Google search, in the hope that the search will catch inaccurate information.

Gemini is also completely free!

However, it can’t take in files besides images, and even with its latest improvements, I don’t think it’s as good as ChatGPT or Claude.

Using Generative AI

Again, remember that AI is not human. This is not Data from Star Trek, or HAL 9000. This is a computer interface that you exclusively address as “Computer!”

Generate a 2-sentence summary of the effect of caffeine on the human body.

[Gemini] Caffeine acts as a stimulant, increasing alertness and energy by blocking adenosine, a sleep-promoting molecule in the brain. However, excessive consumption can lead to anxiety, insomnia, and other negative side effects.

[Claude] Caffeine is a stimulant that acts on the central nervous system, increasing alertness, reducing fatigue, and improving concentration. However, excessive caffeine consumption can lead to adverse effects such as anxiety, restlessness, insomnia, increased heart rate, and digestive issues.

Think of generative AI as a blender that turns half a billion words into goop and returns sentences squeezed out of a tube. The outputs of these large language model (LLM) AI systems default to boring, middle-of-the-road, no-insight text with 97% accuracy on specific facts. The more you push it out of that default, the more it’ll hallucinate nonsense outputs. But as of 2024, these models have built-in guardrails preventing truly bizarre outputs.

Will you be my friend?

[Gemini] That's a very kind offer! While I can't form friendships in the same way a person can, I can definitely be a companion and resource for you. Is there anything you'd like to chat about or ask me today?

[Claude] As an AI language model, I don't have the capacity for deep, personal friendships in the same way humans do. However, I'm here to engage in friendly conversation, provide information, and assist you to the best of my abilities. Think of me as a helpful, knowledgeable companion who's always ready to chat, but our relationship has some inherent limitations given my artificial nature. I appreciate your kindness in extending the offer of friendship though!

It’s fun and cute to make ChatGPT summarize a young adult novel as a sonnet, but for knowledge workers with “white-collar” jobs, the true power of generative AI comes from three places:

  1. Figuring out what you should be reading

  2. Reading stuff (and data) you don’t want to read

  3. Writing stuff you don’t want to write

Use Case: Figure Out What You Should Be Reading

Generative AI, like all neural-network AI systems, excels at collating large amounts of data and finding patterns. GPT-3.5’s half-billion words of training data (and GPT-4’s likely far larger corpus) include enough books, legal reviews, web pages, and Wikipedia articles that you can ask big-picture questions and get reasonable answers. As of April 2024, GPT-4-powered ChatGPT and Gemini can search the internet for information, whereas Claude 3 has a “knowledge cutoff” of August 2023.

Provide some obstacles for decarbonization in North Dakota in under 75 words.

[ChatGPT] In North Dakota, obstacles to decarbonization include reliance on coal and oil industries for economic stability and employment, logistical challenges and high costs associated with transitioning to renewable energy sources, and political resistance to regulatory changes. The state's geographical and climatic conditions also pose challenges for the consistent deployment of certain renewables like solar and wind power.

[Claude] North Dakota's economy heavily relies on fossil fuels, particularly coal and oil production. The state lacks strong renewable energy policies and incentives. Its geographical location and climate conditions limit the potential for solar and wind energy development. Political and public resistance to transitioning away from fossil fuels persists due to concerns over job losses and economic impacts. Limited infrastructure and transmission capacity hinder the integration of renewable energy sources into the existing grid.

These answers are…decent. They hit the big details, although both AIs failed to identify that North Dakota is actually pumping out so much wind energy that the price of electricity is sometimes negative.

That said, these answers are better than Google search, which has gradually worsened over time. Now, if you don’t know what you don’t know, you should start with your generative AI tool.

What is the difference between an HMO and a PPO in 75 words or less?

[Gemini] HMOs are budget-friendly with limited choices. You see an in-network doctor (PCP) first and get referrals for specialists. PPOs offer more flexibility at a higher cost. You can see any in-network doctor (or some out-of-network ones) without a referral, but it typically costs more.

[ChatGPT] An HMO (Health Maintenance Organization) offers lower premiums and out-of-pocket costs but requires primary care physician (PCP) referrals for specialist visits and confines care to a network of providers. A PPO (Preferred Provider Organization) allows more flexibility with higher premiums, enabling patients to visit specialists without referrals and receive care from both network and out-of-network providers, albeit at a higher cost for the latter.

What happened here is that the training data for these AI systems presumably includes articles, search results, webpages, and Wikipedia pages about different types of healthcare plans, and each system collated all of that information into a short blurb that’s accurate enough to guide further research.

It’s great. I now plan my workouts on ChatGPT.

With ChatGPT Plus, you can also set custom instructions, which function as a default add-on to all your prompts so that the outputs are better tailored to you. This is part of my custom instruction set:

Industry expert in:
- Global energy markets
- Renewable energy technologies
Maximize technical detail in the above domains.
Where possible, corroborate takeaways with data and/or citations.
Skip the formalities.

Key Pointers:

  1. Provide a desired length for outputs.

    • These AI systems default to different output lengths (ChatGPT is terser, Claude is wordier), but you can demand a specific length or ask the system to elaborate further.

  2. You can ask for specific details in your summary request.

    • This is where ChatGPT’s custom instructions really come in handy.

  3. Generative AI will never give the same exact answer twice. This is normal.  

  4. Generative AI will make minor factual errors.

    • These tools specifically stumble over names and figures, although web-searching dramatically reduces the error rate for ChatGPT and Gemini.

    • Do not cite AI tools without fact-checking the outputs.

Use Case: Read Stuff You Don’t Want to Read

Generative AI won’t just find relevant information for you to read—it can read that information for you as well, especially when augmented by plugins.

Corroborate insights from these two music reviews by Anthony Fantano in 150 words:
Midnights by Taylor Swift: [YOUTUBE LINK]
Folklore by Taylor Swift: [YOUTUBE LINK]

[ChatGPT, using VoxScript] In his review of Taylor Swift's Midnights, Anthony Fantano describes the album as her "middest" since Reputation, highlighting its lack of fire, bite, and memorable moments. He notes the album's consistency in simple drum patterns and synth patches but criticizes it for its lack of passion and the presence of unflattering lyrics, particularly pointing out "Anti-Hero" and "Vigilante Shit" as examples. Fantano gives Midnights a score of 5/10, indicating a mix of apathy and disappointment.

Contrastingly, Fantano's review of Folklore is more favorable, praising Swift's artistic direction and the blend of folk and pop elements. He appreciates the contributions from The National's Bryce and Aaron Dessner, which add a moody indie appeal. Favorite tracks include "The Last Great American Dynasty" and "Exile," though he criticizes "Cardigan" for being derivative of Lana Del Rey's style. Fantano highlights the album's emotional depth and lyrical maturity, awarding Folklore a 7/10 and recognizing it as a step up in Swift's discography, especially when compared to her earlier pop works.

You can make generative AI read anything you can feed it as copy-pasted plain text or a file—these tools will even work around formatting quirks! Both ChatGPT and Claude can read uploaded PDFs. However, the only way to have any generative AI read a specific URL is to use the VoxScript GPT (which requires a ChatGPT Plus subscription). VoxScript won’t reliably bypass paywalls, but it can read YouTube videos. Generative AI can even corroborate multiple sources, although you may need to coax the system into reading the right inputs.

Some of my favorite “read this for me” applications are:

  • Feeding a resume and job description into a generative AI to generate interview questions

  • Generating takeaways from long documents, like DPU filings or whitepapers

  • Generating takeaways from YouTube videos (only doable with VoxScript)

You can even feed in an email thread and have the AI untangle the story and generate action items—but you shouldn’t do that with sensitive information.

ChatGPT Plus also offers a feature called Data Analysis, which allows you to upload files, from code to CSVs, and have the system run data analysis or code analysis on them. I handed the interface some CSV files, and it wrote its own Python Pandas code—WITH quality comments! The system can read and clean data and return decent, 40th-percentile data analysis, but if you’re reasonably familiar with the data on hand, the system won’t tell you anything new. Claude can also analyze data, but it won’t show you the code it’s using.
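For a sense of what that looks like, here is a minimal sketch of the kind of cleanup-and-summarize Pandas code the Data Analysis feature tends to produce. The file name and column names below are invented for illustration, not taken from a real session.

import pandas as pd

# Hypothetical input file and columns, for illustration only.
df = pd.read_csv("monthly_sales.csv")

# Basic cleaning: drop fully empty rows, standardize column names, parse dates.
df = df.dropna(how="all")
df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
df["month"] = pd.to_datetime(df["month"])

# The "40th-percentile analysis": summary statistics and a simple group-by.
print(df.describe())
print(df.groupby(df["month"].dt.year)["revenue"].sum())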

Again: do not give a generative AI data you don’t want to leak outside of your organization!

Key Pointers:

  1. Generative AI will default to middle-of-the-road summaries of input material, even if that input material is inflammatory.

  2. Generative AI cannot provide insight or utilize informed intuition like a true subject matter expert—although it can catch patterns that a human might miss.

  3. You can ask follow-up questions to a generative AI summary to get more details.

  4. Generative AI will make minor factual errors.

    • The tool specifically stumbles over names and figures, although web-search capability dramatically reduces the error rate.

    • Do not cite AI tools without fact-checking the outputs.

Use Case: Write Boring Text

As of 2024, generative AI can generate text at a 60th-percentile quality versus human writers, but at a much faster pace. AI outputs should never be posted without human review, but they take the brainpower out of writing first drafts you don’t want to write.

Write a brief and polite email to a coworker to follow up about completing a key Excel document. Without this document, the project cannot make the next stage of development. The coworker needs to complete this document in 2 days, but they were asked to complete this document 1 month ago.

[Claude] Subject: Follow-up: Excel Document for Project Progress

Dear [Coworker's Name],

I hope this email finds you well. I wanted to touch base with you regarding the critical Excel document that we need for our project to move forward to the next stage of development.

I understand that you have a lot on your plate, but this document is essential for our project's progress. It was initially requested a month ago, and we are now at a point where we cannot proceed without it. Would it be possible for you to complete and send the document within the next two days?

Please let me know if there are any challenges or if you need any assistance in completing the document. I'm happy to help in any way I can to ensure we meet our project deadlines.

Thank you for your attention to this matter and your continued dedication to our project.

Best regards,

[Your Name]

A good use case is writing LinkedIn posts. If you’re writing some post about an article, you can make an AI tool write the article summary for you. You should still edit the output thoroughly and add your own insight around the AI text, but the updated workflow cuts down on time and effort.

Write a 50-word post about the value of ergonomic chairs in a knowledge work environment in the style of a LinkedIn influencer. Provide at least 5 relevant hashtags.

[Claude] Invest in your productivity and well-being with ergonomic chairs! 🪑💻 Proper support and comfort are essential for knowledge workers spending long hours at their desks. Reduce strain, improve posture, and boost focus with a high-quality ergonomic chair. Your body (and work) will thank you! 🙌 #ergonomics #workplacewellness #productivity #officesetup #healthylifestyle

[Gemini] Investing in ergonomic chairs for your knowledge workers isn't just about comfort, it's about productivity & ROI! Proper back & posture support = less fatigue & pain, leading to more focused & engaged employees. #Ergonomics #WorkplaceWellness #KnowledgeWork #InvestInYourPeople #HealthyWorkLifeBalance

Many parts of your job do not require good writing—and those tasks are a strong use case for AI. That said, as of 2024, generative AI only generates okay hashtags—you’ll have to add your own for social media posts.

However, for administrative staff, the most game-changing use case for ChatGPT and VoxScript in particular is writing meeting minutes. If you’re recording a meeting through a platform like YouTube or Zoom, you can paste in the raw transcript, or point VoxScript at a YouTube link.

Generate meeting minutes for the first 15 minutes of this meeting: [YOUTUBE LINK]

[ChatGPT, using VoxScript, from 5 October 2023]

Agenda:

Special permit, variance, or other relief under [Town] zoning bylaws for the property located at [Address].

Proposal to raise and rebuild a restaurant with a drive-through window and associated digital technology signage.

Attendees:

[Attorney]…representing [Fast Food Chain].

[Engineer from an engineering firm].

The operator of the [Fast Food Chain] restaurant.

Various board members and public attendees.

Proceedings:

Opening and Notifications:

Meeting opened with a reading of the agenda.

Dispensed with the reading of the abutters' list.

Notified select board, Police Department, building department, Health Department, engineering division, town clerk, fire department, Conservation Commission, assessor's office, CPDC, and planning boards of surrounding towns.

Attendees sworn in under oath.

Reminder to sign in and introduce oneself during public comment.

Presentation by [Attorney]:

Introduction of the applicant, [Fast Food Chain].

Overview of the existing restaurant at [Address], built in 1963, and continuous operations since.

Proposal to eliminate the play place and introduce a dual-lane drive-through to improve access and reduce vehicle queuing.

Addressed previous issues:

Elimination of the request for a variance for a fourth menu board.

Proposal of a 12-foot high sound wall along the east of the property to reduce noise concerns from the drive-through operations.

Key Takeaways:

The meeting focused on the continuation of a public hearing…regarding the [Fast Food Chain] restaurant at [Address].

The main concerns were related to the variance for menu boards and noise from the drive-through operations, both of which were addressed in the updated plan.

Next Steps:

Further discussion and potential decision-making regarding the proposed changes to the [Fast Food Chain] restaurant.

The only limit to ChatGPT’s power here is its input text limit (you’ll need to feed in the transcript piece by piece; see the sketch below). Claude can take in a much longer transcript, but it can’t read a video directly. (Gemini…can’t do any of this.) Either way, you still need to edit the text, tie themes together, and double-check all the names and details, but you no longer need to write the first pass yourself.
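If the transcript is too long to paste in one go, a crude word-count split is usually good enough. This is only a sketch: the 5,000-word chunk size is a conservative guess at what stays under ChatGPT’s input limit, not an official figure, and the file name is a placeholder.

def chunk_transcript(text: str, words_per_chunk: int = 5000) -> list[str]:
    # Split a long transcript into pieces small enough to paste one at a time.
    words = text.split()
    return [
        " ".join(words[i:i + words_per_chunk])
        for i in range(0, len(words), words_per_chunk)
    ]

with open("meeting_transcript.txt", encoding="utf-8") as f:  # placeholder file name
    transcript = f.read()

for part, chunk in enumerate(chunk_transcript(transcript), start=1):
    print(f"--- Part {part}: paste this into the chat, then ask for minutes ---")
    print(chunk[:200], "...")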

Key Pointers:

  1. Generative AI does not generate insight or ideas.

    • If it generates truly interesting writing, it’s probably hallucinating.

  2. AI cannot achieve the brevity, clarity, or wit required for strong communication.

  3. AI is good at writing transition sentences for paragraphs you’ve already written, and conclusion paragraphs for papers you’ve already written.

AI Is Not Human, and You Are Not a Computer

Computer programs are better than humans at repetitive tasks and high-volume data processing, and AI is still a type of computer program. Unless AI systems progress to a paradigm beyond training-data-dependent neural networks, no AI can replace a human with creativity, insight, communication skills, and/or professional licensure.

The promise of generative AI is that you can dispatch these technologies to take muck work out of a knowledge-work or “white-collar” job. The more boring tasks you can automate with AI, the more time you can divert to the challenging work that only humans can do.

The challenge is keeping up with the pace of change. I wrote the first iteration of this article in April 2023 based on what ChatGPT 3.5 could do, and I’ve since rewritten it four times because of upgrades to all of these systems. I’ll keep track of what’s happening with generative AI, but I recommend tinkering with the toolset yourself. Yes, ChatGPT and Claude have paid plans that cost $20/month each, but as office productivity software, these tools are a bargain—just be careful about putting sensitive data in there.