Here’s some context for the question. When image generating AIs became available, I tried them out and found that the results were often quite uncanny or even straight up horrible. I ended up seeing my fair share of twisted fingers, scary faces and mutated abominations of all kinds.
Some of those pictures made me think that since the AI really loves to create horror movie material, why not take advantage of this property. I started asking it to make all sorts of nightmare monsters that could have escaped from movies such as The Thing. Oh boy, did it work! I think I’ve found the ideal way to use an image generating AI. Obviously, it can do other stuff too, but with this particular category, the results are perfect nearly every time. Making other types of images usually requires some creative promptcrafting, editing, time and effort. When you ask for a “mutated abomination from Hell”, it’s pretty much guaranteed to work perfectly every time.
What about LLMs though? Have you noticed that LLMs like ChatGPT tend to gravitate towards a specific style or genre? Is it long-winded business books with loads of unnecessary repetition, or is it pointless self-help books that struggle to squeeze even a single good idea into a hundred pages? Is it something even worse? What would be the ideal use for LLMs? What’s the sort of thing where LLMs perform exceptionally well?
Anything where accuracy does not matter. Writing e.g. sports commentary articles.
Exactly.
LLMs are ideally suited for replacing corporate middle managers everywhere.
deleted by creator
That’s a pretty cool site. Next time Bing fails me, I’ll try this site to see if the results are any better.
I use it to add more dimensions to my D&D sessions. For example: every town now has at least one shop that sells t-shirts. I describe the setting to ChatGPT, then ask it to come up with 10 shirt ideas, 3 or 4 of which will be pretty good. One of my players has started collecting the shirts.
One time GPT even came up with a shirt design that I could use as a major plot clue. The players missed it, but it would have helped them out quite a bit.
Oh, that’s interesting. You could also ask GPT to generate names and descriptions for places and NPCs according to your specifications. I suppose you might still need to modify these things a bit so that everything works in the story you’re building.
About 1/3rd of what I get with 3.5 is good. Maybe half with version 4. For example, if I want a specific roll table I ask for 40 options, so I get 15 good ones and only need to think of 5 on my own.
I’ve been using GPT to make D&D content and it’s been great overall, but names are its weakest part.
It’s nearly impossible to get it to make a name that isn’t adjective-noun.
Yeah, GPT is a great DM assistant. It can also help generate enemy stats for encounters.
With the proper documentation, LLMs are great at helping with code. Take Phind, which uses GPT-3.5 but with sources. It’s great for small code snippets and pulls its answers from documentation and Stack Overflow.
I’ve had free access to GitHub Copilot since the beta and it’s great, especially when working with unfamiliar libraries or languages. I don’t have to pull out the documentation and I can get on with the logic. Of course it often hallucinates, and the code it spits out needs to be checked, but still, it saves a lot of time.
… Eh, no. I’ve seen GPT generate some incredibly unsound C despite being given half a page of text on the problem.
C is already incredibly unsound /hj
I’ve had some good experiences with asking Bing to write a few lines of VBA or R. Normally, I’ll just ask it to solve a specific problem, but then I’ll modify the code to suit my specific needs.
Horoscopes. Fortune cookies. Political speeches.
“ChatGPT, write me some generic wank about Virgos.”
Oh, that’s something I haven’t tried before. Let’s give that a go.
----
Here is a possible political speech about the importance of grid energy storage and why the government needs to invest in it:
My fellow citizens,
We are living in a time of unprecedented challenges and opportunities for our energy system. We face the threat of climate change, which requires us to reduce our dependence on fossil fuels and increase our use of clean and renewable energy sources. We also face the opportunity of technological innovation, which enables us to harness the power of the sun, the wind, and other natural resources to generate electricity that is affordable, reliable, and sustainable.
But there is a missing link in our energy system: grid energy storage. Grid energy storage is the ability to store excess electricity when it is abundant and cheap, and release it when it is scarce and expensive. Grid energy storage can help us integrate more solar, wind, and distributed energy resources into our grid, without compromising its stability and reliability. Grid energy storage can also improve the efficiency of our grid, by reducing the need for costly and polluting peak power plants, and by providing essential services such as frequency regulation, voltage support, and black start capability. Grid energy storage can also enhance our energy security, by making us less vulnerable to power outages, cyberattacks, and natural disasters.
Grid energy storage is not a futuristic concept. It is a reality that is already being deployed around the world. According to the International Energy Agency¹, the total installed capacity of grid-scale battery storage stood at close to 28 GW at the end of 2022¹, most of which was added over the course of the previous 6 years¹. China led the market in grid-scale battery storage additions in 2022, with annual installations approaching 5 GW¹. The United States followed closely, with 4 GW commissioned over the course of the year¹. Other countries and regions, such as India¹, the European Union¹, Australia⁵, and South Korea⁵, have also set ambitious targets and policies to support the development of grid-scale battery storage.
But we cannot afford to be complacent. Grid energy storage still faces many challenges that need to be overcome. These challenges include planning, regulation, standardization, valuation, environmental impact, safety, and social acceptance⁶. We need a coherent and comprehensive strategy to address these challenges and unlock the full potential of grid energy storage for our energy system.
That is why I am calling on our government to invest in grid energy storage as a national priority. We need to allocate more funds for research and development of new and improved storage technologies, materials, and systems⁹. We need to create a level playing field for storage technologies to compete with other resources in the electricity market⁷. We need to update our codes and standards to ensure the safe and reliable operation of storage systems⁶. We need to develop methods and tools to assess the value and benefits of storage systems for the grid and society⁸. And we need to engage with stakeholders and communities to raise awareness and foster trust in storage systems⁶.
Grid energy storage is not only a technical solution. It is also a political vision. A vision of a clean, resilient, and smart energy system that serves the needs of our people and our planet. A vision that we can make a reality if we work together.
Thank you for your attention.
Source: Conversation with Bing, 16/10/2023
(1) Energy storage - IEA. https://www.iea.org/energy-system/electricity/grid-scale-storage.
(2) On-grid batteries for large-scale energy storage: Challenges and … https://www.cambridge.org/core/journals/mrs-energy-and-sustainability/article/ongrid-batteries-for-largescale-energy-storage-challenges-and-opportunities-for-policy-and-technology/3671E7C0E8F8B570FDA6C8321E5DD441.
(3) Utility-Scale Energy Storage: Technologies and Challenges for an … https://www.gao.gov/products/gao-23-105583.
(4) Review of electrical energy storage technologies, materials and systems … https://pubs.rsc.org/en/content/articlelanding/2018/ee/c8ee01419a.
(5) Energy storage growth means data and planning challenges for grids. https://www.energy-storage.news/data-availability-and-resource-planning-challenges-for-grid-operators-as-more-energy-storage-comes-online/.
(6) Energy storage technologies opportunities and challenges in smart grids … https://ieeexplore.ieee.org/document/7548263/.
(7) Grid energy storage - Wikipedia. https://en.wikipedia.org/wiki/Grid_energy_storage.
(8) Benefits of energy storage | ACP. https://cleanpower.org/facts/clean-energy-storage/benefits-of-energy-storage/.
(9) Requirements and Benefits of Grid Storage. http://large.stanford.edu/courses/2013/ph240/cabrera1/.
(10) undefined. https://doi.org/10.1557/mre.2018.11.
----
I’m not an expert on political speeches or the power grid, so I didn’t find much to complain about. Actual politicians and grid engineers might disagree though.
I feel LLM-created texts often use rigid structure along with the fitting linking words and phrases – “on one hand…, on the other hand”, “furthermore”, “in conclusion”. Like a high school student writing an essay. Also, the content may or may not be correct, and it’s mostly just stolen from several sources and patched together without any thought or care – also like a high school essay. So I’m gonna go with that.
TL;DR ChatGPT = What to Expect When Expecting
I’ve had some success quietly replacing middle-to-executive management with LLMs.
It’s not perfect, but the quality and coherence of the ideas went up a moderate amount. Obviously a good CEO is a valuable thing, but lacking that, ChatGPT does OK at defining company direction and strategy.
It’s not good enough to replace a half-decent copy writer though.
You severely underestimate the demand for crappy copy that AI is perfectly able to supply.
I see your point, and I agree there will indeed be a lot of demand. My own strategy is to move against this kind of trend, though. When the competition focuses on SEO, dark patterns, and cheap crud – double down on quality and customer loyalty. When they are over-focusing on quality, then make it cheap, cheerful, and easy to find :P
On the boards I advise (just a few, I’m not that influential), a lot of the use of LLMs has stemmed from (frankly) lazy executives not doing their job (their jobs are mostly judgement and delegation – this is a failure of both). Quality control balked at what they suggested publishing (it was really nowhere near good enough, and off-brand). There’s this lesson I hold to heart: once something stops doing the things that give it identity, it begins to fragment and fall apart. Whether it’s Greece, Rome, one of several Chinese dynasties, a company (Radio Shack! Sears!)… or all those executives and managers in retirement when there’s no more decisions to make or people to manage :D
So yeah there’s going to be a big demand for it, but that’s exactly why I consider our copy writers and designers more valuable now. It’s an opportunity for them to shine. Should be easy to retain them (or hire more) in the coming market too – and for the current executive, what a missed leadership opportunity! I’m not blameless either – my job is to persuade them, and I haven’t succeeded.
Anyway, that’s a little slice of my life, which I hope you found entertaining.
Don’t get me wrong though – I do love LLMs and also image diffusion models. I’m really excited about their future, especially for coding and high-level planning and reasoning! They’re not that good at these things yet, but I think it’s going to happen. I could make so many excellent things to share with the world – even if they just help me reliably debug faster, or if they write the code and I write the unit tests by hand!
Tech troubleshooting, specifically for Linux but I’ve used it successfully for Docker as well.
I used it to learn Ansible and Terraform. It probably does 80% of the work with me occasionally having to point out that it made up a module, is using a deprecated format or something like that. Still a huge time saver though. In ten seconds it can output something at least as good as what I’d produce with 15 minutes of reading documentation and searching for examples.
That is a valid use for an LLM, especially in easy cases. With more complex cases, I usually end up getting completely incorrect tech advice, but eventually I’ve always managed to make things work. It may require a few messages back and forth, but eventually I’ve managed to narrow it down enough that I can ask the right question and I finally get the right answer.
it’s in the name: generative pre-trained transformer. the one thing ChatGPT and GPT-3/3.5/4 are truly good at is transforming data. it can restructure paragraphs to have a different flow, take class notes and make flashcards out of them (that’s how i use ChatGPT), or even take non-textual data and potentially present it in textual format if trained right
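If you wanted to script that kind of transformation instead of pasting notes into the chat window, a minimal sketch might look like this (assuming the official OpenAI Python SDK; the model name and prompt wording are just placeholder choices, not recommendations):

```python
# Minimal sketch: turn free-form class notes into Q/A flashcards.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def notes_to_flashcards(notes: str, count: int = 10) -> str:
    """Ask the model to restructure notes into question/answer flashcards."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You turn study notes into concise Q/A flashcards."},
            {"role": "user",
             "content": f"Make {count} flashcards (format 'Q: ...' / 'A: ...') "
                        f"from these notes:\n\n{notes}"},
        ],
    )
    return response.choices[0].message.content

print(notes_to_flashcards("Mitochondria produce ATP via oxidative phosphorylation."))
```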
I’ve actually used GPT to summarize book reviews on Amazon and Goodreads. I’m not entirely sure if Bing really reads all the book reviews I tell it to read, but it seems to be pretty good at finding the details that matter to me. In my prompt I’m telling it to skip all the 5-star reviews so that it will only focus on finding common complaints. Based on the summary, I’ll then decide if I can live with the flaws the book has.
I’ve given some more thought to this transformer thing. It really is in the name, just like you said. Transforming text into another form really is the main area of expertise. I feel like I should give GPT some transformation tasks more often. Generating new stuff can be fun, but that might not be the best way to use it.
It’s a very simple observation about the name, but I think pointing it out has really changed the way I think about GPT. Thanks!
I’ve found LLMs to be very helpful with handling boilerplate and helping me debug my programs.
I think roleplaying a virtual girlfriend works quite well with an LLM. Other characters in role-play scenarios, like a text adventure, work too. It can roleplay a mage, a vampire, a goblin…
Dialogue and things like that work way better than long-form text. It sounds better than a longer text by ChatGPT with that certain ChatGPT tone to it. (My opinion)
And it’s not important to get the facts right. I’ve read and prepared some summaries with ChatGPT and they are often riddled with inaccuracies, sometimes even misrepresenting things badly. I don’t care too much in my AI dungeon adventure.
Not that this has anything to do with image generation, but I’ve always thought translating legal jargon down to a single paragraph of plain English would be a good purpose for AI. Imagine bullet points of all the major things you’re handing over when clicking “I agree” on an Apple terms and conditions agreement?
Or better yet, ask it to summarize all the things that matter to someone who isn’t an app developer, isn’t trying to sue Apple, isn’t trying to hack the software, isn’t trying to build anything on top of their software, isn’t trying to sell anything and doesn’t even run a business of any kind. There are more than a few paragraphs specifically trying to cover all of these special cases, and they don’t really concern someone who just wants to use their iPhone to call their grandma.
Generating a large amount of utterances to train your cloud service language model for a bot because I’m sure not writing hundreds of utterances all asking the same thing.
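For what it’s worth, a rough sketch of how that bulk generation could be scripted (again assuming the OpenAI Python SDK; the intent name and seed phrases are made-up examples, and the output still needs a human de-duplication pass):

```python
# Minimal sketch: generate many training utterances for one bot intent.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment;
# intent name, seed phrases and model are placeholders.
from openai import OpenAI

client = OpenAI()

def generate_utterances(intent: str, seeds: list[str], n: int = 50) -> list[str]:
    """Ask the model for n paraphrases of the seed utterances, one per line."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": (
                f"Write {n} different ways a user might phrase the intent "
                f"'{intent}'. Examples: {'; '.join(seeds)}. "
                "Return one utterance per line, with no numbering."
            ),
        }],
    )
    lines = response.choices[0].message.content.splitlines()
    return [line.strip() for line in lines if line.strip()]

utterances = generate_utterances(
    "check_order_status",
    ["where is my order", "has my package shipped yet"],
)
print(len(utterances), utterances[:5])
```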
You could defend someone in court for free!
Not with how ChatGPT lacks the ability to push back when I tell it that 2 plus 2 is actually 5. It just accepts it.
I feel like the other guy would probably end up spending the rest of his life in jail and he might even sue me for making his situation even worse. Sounds risky.
Are you the defense attorney using GPT in this scenario??
Yeah, that’s the scenario I was immediately thinking of. Imagine someone was charged for theft and ended up getting a sentence for murder, because GPT did such a fine job and the lawyer couldn’t be bothered to check anything or use their own brain. Well, at least it was free.
Probably CAD work. AI could probably learn from the CAD files already freely available. Then it could replicate them using any number of free CAD systems such as FreeCAD. Then it could expand into complex CAD like wiring, circuit board generation, etc. Maybe it could tackle BIM to make house plans. Or it could not only generate parts and assemblies, but also perform structural FEA. Maybe it could even expand to all sorts of computational analysis, and even begin by first analyzing what type of analysis would be needed… a complete AI replacement for engineers… My job is probably going bye-bye.
If you open a file like that in a text editor (such as Notepad++), do you see a messy hex porridge or does it have structure an LLM might be able to understand?
All CAD files have a defined structure which could be interpreted. Unfortunately, many CAD formats are proprietary, and decoding them outside the closed-source programs that create them is a massive undertaking that would get you sued.
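The open exchange formats, at least, are plain text rather than hex porridge. As a minimal sketch, here’s how little code it takes to read the geometry out of an ASCII STL file (the file name is just a hypothetical example):

```python
# Minimal sketch: extract triangle vertices from an ASCII STL file.
# ASCII STL is one of the open, text-based CAD exchange formats; native
# proprietary formats are generally binary and far less approachable.
from pathlib import Path

def read_ascii_stl(path: str) -> list[tuple[float, ...]]:
    """Return every vertex listed in an ASCII STL file."""
    vertices = []
    for line in Path(path).read_text().splitlines():
        parts = line.split()
        if parts and parts[0] == "vertex":  # lines look like: vertex x y z
            vertices.append(tuple(float(p) for p in parts[1:4]))
    return vertices

verts = read_ascii_stl("cube.stl")  # hypothetical file name
print(f"{len(verts)} vertices, e.g. {verts[:3]}")
```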
It’s probably not necessary for it to be human readable.
Funnily enough: revealing plagiarism. Or even just judging the originality of a given text. Train it to assign an “originality value” between 0 (I’ve seen this exact wording before) and 1 (this whole text is new to me) to help universities, scientific journals or even just high schools judge the amount of novelty a proposed publication really provides.
Recently I’ve seen some discussion surrounding this. Apparently, this method also gives lots of false positives, but at least it should be able to help teachers narrow down which papers may require further investigation.
Recent studies show it doesn’t work at all, and has likely caused irreparable harm to people whose academics have been judged by all of the services out there. It has finally been admitted that it didn’t work and likely won’t work.
Well, yeah, that approach would work if you train it on one model, but that doesn’t mean it would work on another model. For the normal user who uses ChatGPT, though, it is probably enough to detect it at least 80-90% of the time.