I have a boss who tells us weekly that everything we do should start with AI. Researching? Ask ChatGPT first. Writing an email or a document? Get ChatGPT to do it.
They send me documents they “put together” that are clearly ChatGPT-generated, with no shame. They tell us that if we aren’t doing these things, our careers will be dead. And their boss is bought into AI just as much, and so on.
I feel like I am living in a nightmare.
Uff. That sounds like a nightmare. I’m glad my job doesn’t force us to use AI. It’s encouraged, but my managers also say, “Use whatever makes you the most productive.” AI makes me slower because I’m experienced and already know what I want and how I want it. So instead of fighting with the AI or fact-checking it, I can just do shit right the first time.
For tasks I don’t have experience in, a web search is just as fast: search, click the first link. Sure, I’ll click through and read a few pages, but that’s not wasted time. That’s called learning.
I have a friend who works at a company where they have AI usage quotas that affect their performance review. I would fucking quit that job immediately. Not all jobs are this crazy.
AI tends to generate tech debt. I have some coworkers who put up nasty, tech-debt-riddled AI slop merge requests for review. My policy is: if you’re not gonna take the time to use your brain and write something, then I’m not gonna waste my time reviewing your slop. In those cases, I use AI to “review” the code and decide whether to approve or not. IDGAF.
I am building it! Or, well, not it anymore but a product that is heavily based on it.
I think we as a company recognize that, like, 95% of AI products right now are shit, and that the default path from here is that power continues to concentrate at the large labs like OpenAI, which haven’t been behaving particularly well.
But we also believe that there are good ways to use it. We hope to build one.
The thing your boss is asking you to do is shitty. However, TBQH humanity doesn’t really know what LLMs are useful for yet. It’s going to be a long process to find that out, and trying it in places where it isn’t helpful is part of that.
Lastly, even if LLMs don’t turn out to be useful for real work, there is something interesting and worth exploring there. Look at researcher/writers like nostalgebraist and Janus - they’re exploring what LLMs are like as beings. Not that they’re conscious, but rather that there are interesting and complex things going on in there that we don’t understand yet. The overarching feeling in my workplace is that we’re in a frontier time, where clever thinking and exploration will be rewarded.
Dumbass senior contract people and program managers are all for using Copilot, and I’ve caught several people using ChatGPT as a search engine - or at least that’s what they tell me they think it is.
Surprisingly reasonable?
I was terrified that entering the corporate world would mean being surrounded by people who are obsessed with AI.
Instead like… The higher-ups seem to be bullish on it and how much money it’ll make them (… And I don’t mind because we get bonuses if the corp does well), but even they talk about how “if you just let AI do the job for you, you’ll turn in bad quality work” and “AI just gets you started, don’t rely on it”
We use some machine learning stuff in places, and we have a local chatbot model for searching through internal regulations. I’ve used Copilot to get some raw ideas which I cooked up into something decent later.
It’s been a’ight.
This is the way. I honestly don’t care how the execs think about AI or whether they use it themselves - just don’t force its usage on me. I’ve been touching computers since before some of them existed. For me it’s just one extra tool that gets pulled out in very specific scenarios and used for a short amount of time. It’s like the electric start on my snowblower - you don’t technically need it, and it won’t do the work for you (so don’t expect it to), but at the right time it’s extremely nice to have.
So far it’s a glorified search engine, which it is mildly competent at. It just speeds up collecting the information I would have gathered anyway, so I can get to sorting the useful from the useless faster.
That said, I’ve seen emails from people that were written with AI, and it instantly makes me less likely to take them seriously. Just tell me what the end goal is and we can discuss how best to get there, instead of regurgitating some slop that wouldn’t get us there in the first place!
One of my managers is like that, I’ve known him for about 5 years and he’s been the biggest idiot I’ve ever met the entire time. But ever since AI came out he’s turned it up to 11.
Fortunately my other manager can’t stand him, and they have blazing arguments, so generally speaking, if he tells me to do something I don’t like / don’t want to do, I go and tattle.
I feel like giving AI our information on a regular basis is just training AI to do our jobs.
I’m a teacher and we’re constantly encouraged to use Copilot for creating questions, feedback, writing samples, etc.
You can even use AI to grade papers. That sure as shit shouldn’t happen.
My subordinate is quite proud of the code AI produces from his prompts. I don’t use AI personally, but it is surely a tool. Don’t know why one would be proud of work they didn’t do and can’t explain, though. I have to manage the AI use down to a “keep it simple” level. Use AI if there is a use case, not just because it is there to be used…
I vibe code from time to time because people sometimes demand quick results on an unachievable timeline. That said, I may use an LLM to generate the base code that provides a basic solution to what is needed, and then I go over the code and review/refactor it line by line. Sometimes, if time is severely pressed and the code is waaaay off even a bare minimum, I’ll have the LLM revise the code to solve some of the problem, and then I review, adjust, and amend where needed.
I treat AI as a tool and a (frustrating and annoying) companion in my work, but ultimately I review, adjust, and amend (and sometimes refactor) everything. It’s kind of similar to reading code samples from websites, copying them if you can use them, and refactoring them for your app, except tailored a bit more to what you need already…
By the same token, I also prefer to do it all myself if I can, so if I’m not pressed for time, or I know it’s something I can do quickly, I’ll do it myself.
Obsessive.
I am reminded of this article.
The future of web development is AI. Get on or get left behind.
5/5/2025
Editor’s Note: previous titles for this article have been added here for posterity.
The future of web development is blockchain. Get on or get left behind.
The future of web development is CSS-in-JS. Get on or get left behind.
The future of web development is Progressive Web Apps. Get on or get left behind.
The future of web development is Silverlight. Get on or get left behind.
The future of web development is XHTML. Get on or get left behind.
The future of web development is Flash. Get on or get left behind.
The future of web development is ActiveX. Get on or get left behind.
The future of web development is Java applets. Get on or get left behind.

If you aren’t using this technology, then you are shooting yourself in the foot. There is no future where this technology is not dominant and relevant. If you are not using this, you will be unemployable. This technology solves every development problem we have had. I can teach you how with my $5000 course.
lol Silverlight.
In fairness, a lot of those did take over the web for a time and led to some cool stuff (and also some wild security exploits).
PWAs are cool af and widely used for publishing apps on the App/Play stores. It’s a shame they haven’t been adopted more widely for their original purpose of installing apps outside of those stores, but you can’t get everything you want.
Holy shit… XD
Our devs are implementing some ML for anomaly detection, which seems promising.
There’s also an LLM with MCP etc. that writes the pull requests and some documentation, at least, so I guess our devs like it. The customers LOVE it, but it keeps making shit up and they don’t mind. Stuff like “make a graph of usage on weekdays” where it includes 6 days some weeks. They generated a monthly report for themselves, and it made up every scrap of data, and the customer missed the little note at the bottom where the damn thing said “I can regenerate this report with actual data if it is made available to me”.
My “company” is tiny, employing only myself, one colleague, and an assistant. We’re accountants.
We self-host some models from Hugging Face.
We don’t really use these as part of any established workflow. Thinking of some examples …
This week my colleague used a model to prep a simple contract between herself and her daughter, whereby her daughter would perform whatever chores and she would pay for cello lessons.
My assistant used an AI thing to parse some scanned bank statements, so this one is work-related. The alternative is bashing out the dates, descriptions, and amounts manually. Using traditional OCR for this purpose doesn’t really save any time, because hunting down all the mistakes and missed decimal places takes a lot of effort. Parsing this way takes about a third of the time, and it’s less mentally taxing. However, this isn’t a task we regularly perform, because in the vast majority of cases we can obviously get the data directly instead of printed statements.
I was trying to think of the proper term for an English word that evolved from some phrase or other, like “steering board” becoming “starboard”. The gen AI suggested “portmanteau”, but I actually think there’s a better word I just haven’t remembered yet.
I had it create a bash one-liner to extract a specific section from a README.md.
I asked it to explain the method of action of diazepam.
My feelings about AI are that it’s pretty great for specific niche tasks like this. Like the bash one-liner: it took 30 seconds to ask, and I got an immediate, working solution. Without gen AI I just wouldn’t be able to grep whatever section from a README - not exactly a life-changing superpower, but a small improvement to whatever project I was working on.
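The actual one-liner isn’t quoted, but a minimal sketch of the kind of thing meant here - assuming the README uses Markdown “## ” headings and a hypothetical “Installation” section - could look like:

```shell
# Print the body of the "## Installation" section of README.md,
# stopping at the next "## " heading. The section name is illustrative.
awk '/^## Installation$/{flag=1; next} /^## /{flag=0} flag' README.md
```

The `next` skips the heading line itself, and any later “## ” heading switches printing back off, so only that one section’s body is emitted.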
In terms of our ability to do our work and deliver results for clients, it’s a 10% bump to efficiency and productivity when used correctly. Gen AI is not going to put us out of a job.
My company is doing small trial runs and trying to get feedback on whether the stuff is helpful. They are obviously pushing things because they are hopeful, but most people report that AI is helpful about 45% of the time. I’m sorry your leadership just dove in head first. That sounds like such a pain.
Sounds like your company is run by people who are a bit more sensible and not driven by hype and fomo.
Hype and FOMO are the main drivers of the Silicon Valley economy! I hate it here.
I work in IT, and many of the managers are pushing it. Nothing draconian - there are a few true believers, but the general vibe is that everybody is pushing it because they feel they’ll be judged if they don’t.
Two of my coworkers are true believers in the slop; one of them is constantly saying he’s been “consulting with ChatGPT” like it’s an oracle or something. Ironically, he’s the least productive member of the team. It takes him days to do stuff that takes us a few hours.