
Thread: Visions of the GPT-4 era

  1. #1

Visions of the GPT-4 era

    We're about to enter the era of GPT-4 (and its cousins obv).

    What do you think it'll look like for you? What'll it bring, apart from an unceasing torrent of brain-melting AI-porn?

GPT-3 was thrilling; GPT-3.5 was ominous. What we know of GPT-4—and of the industry's current approach to AI—suggests a shitstorm of unforeseeable proportions in our near future. Trivial guardrails, breathtakingly irresponsible governance, limited transparency, zero oversight, and an unprecedented potential for cascading systemic fuckups facilitated by the dumbest weird nerds on the planet. The whitepaper, and the coverage of OpenAI's processes in particular, highlight multiple ethical and social risks, but I have no doubt MSFT, Google, Meta, etc. will be happy to exacerbate those problems a few months from now.

    Full disclosure: I have friends who're working on making lots of money off of AI and/or incorporating AI into their current work activities. I think it's difficult to appreciate the existing capabilities of this technology without using it or seeing it in use in real-world applications. From a tech perspective, I'm an enthusiast; from a moral perspective, I'm ambivalent.
    “Humanity's greatest advances are not in its discoveries, but in how those discoveries are applied to reduce inequity.”
    — Bill Gates

  2. #2
From a tech perspective: My impression of trying to work with ChatGPT to get it to actually do something useful is that by the time you've explained to it precisely what you want, you might as well have done it yourself. It's also wrong quite frequently, so you have to check its output manually anyway. I am more than impressed, however, by its ability to understand natural language. It might get things wrong, but it rarely misunderstands what you're asking for. That is, after all, all it was ever supposed to be: a language model. I suspect its future lies in being a natural language interface for other systems, AI/ML or otherwise.
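The "natural language interface" idea is roughly: the model's only job is to translate a request into a structured command that some other, ordinary system executes. A toy sketch of that split — the `fake_llm` stub and the command names are made up for illustration, a real setup would prompt an actual model to emit only JSON:

```python
import json

def fake_llm(request: str) -> str:
    """Stand-in for a real language model. A real one would be prompted
    to respond with nothing but JSON like {"command": ..., "args": {...}}."""
    if "lights" in request:
        return json.dumps({"command": "set_lights", "args": {"state": "off"}})
    return json.dumps({"command": "unknown", "args": {}})

def dispatch(raw: str) -> str:
    """Parse the model's JSON and hand it to ordinary, non-AI code."""
    msg = json.loads(raw)
    handlers = {
        "set_lights": lambda a: f"lights -> {a['state']}",
        "unknown": lambda a: "sorry, no idea",
    }
    return handlers[msg["command"]](msg["args"])

print(dispatch(fake_llm("turn the lights off please")))  # lights -> off
```

The point of the shape is that the model never touches the light switch; it only produces the intermediate JSON, which deterministic code can validate before anything actually runs.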

From a social perspective: fucking chaos. Everything's going to change. Again. Once, we measured the passage of time by centuries, or by the rise and fall of civilisations, or by the reigns of monarchs. In the first part of my life, things had already become so batshit we measured the passage of time in decades; the 70s, 80s, and 90s etc. were distinct enough from each other to pass as historical periods. What's next? Years? Maybe that bastard Alberjohns was right.
    The world is dreaming
    Your god is a demon
    And mine is a mountain of souls screaming

  3. #3
    Going by his cognitive skills, Alberjohns is ChatGPT.
    Hope is the denial of reality

  4. #4
    Quote Originally Posted by Steely Glint View Post
From a tech perspective: My impression of trying to work with ChatGPT to get it to actually do something useful is that by the time you've explained to it precisely what you want, you might as well have done it yourself. It's also wrong quite frequently, so you have to check its output manually anyway. I am more than impressed, however, by its ability to understand natural language. It might get things wrong, but it rarely misunderstands what you're asking for. That is, after all, all it was ever supposed to be: a language model. I suspect its future lies in being a natural language interface for other systems, AI/ML or otherwise.
    There's already quite a bit happening on that front, and I think we're going to see a lot more AIs being slammed together to accomplish ridiculous things.

    I converted a spare room into an AI farm and it's been pretty useful. It's a mistake to think the generative models can do anything unsupervised - being wrong is a fundamental part of how they work and it's where their creativity comes from. But being pretty close has been a huge timesaver for me.

    I'm all in on AI right now, and I won't rest until you're all unemployed and unemployable.

  5. #5
    I'm unemployable right now!

That said, I think the media has overplayed how scary the "new AI" is. Sure, it can make me a picture of a duck hugging a kitten, but it's still not really there when it comes to actually innovative stuff like educating people, or writing whatever fan-fiction of Ariel Sharon it is that keeps Low-key employed.

    I also thought it was interesting that some university educators thought that ChatGPT would be an asset rather than a liability, because they thought it could be used to check on students' work rather than the kids having it do the work for them.
    In the future, the Berlin wall will be a mile high, and made of steel. You too will be made to crawl, to lick children's blood from jackboots. There will be no creativity, only productivity. Instead of love there will be fear and distrust, instead of surrender there will be submission. Contact will be replaced with isolation, and joy with shame. Hope will cease to exist as a concept. The Earth will be covered with steel and concrete. There will be an electronic policeman in every head. Your children will be born in chains, live only to serve, and die in anguish and ignorance.
    The universe we observe has precisely the properties we should expect if there is, at bottom, no design, no purpose, no evil, no good, nothing but blind, pitiless indifference.

  6. #6
    I cannot speak on this topic, I'm being watched....
    .

  7. #7
    If anyone wants to try it and hasn't seen it yet, I did leave an art-bot in the TWF discord.

  8. #8
    Quote Originally Posted by Steely Glint View Post
From a tech perspective: My impression of trying to work with ChatGPT to get it to actually do something useful is that by the time you've explained to it precisely what you want, you might as well have done it yourself. It's also wrong quite frequently, so you have to check its output manually anyway. I am more than impressed, however, by its ability to understand natural language. It might get things wrong, but it rarely misunderstands what you're asking for. That is, after all, all it was ever supposed to be: a language model. I suspect its future lies in being a natural language interface for other systems, AI/ML or otherwise.

From a social perspective: fucking chaos. Everything's going to change. Again. Once, we measured the passage of time by centuries, or by the rise and fall of civilisations, or by the reigns of monarchs. In the first part of my life, things had already become so batshit we measured the passage of time in decades; the 70s, 80s, and 90s etc. were distinct enough from each other to pass as historical periods. What's next? Years? Maybe that bastard Alberjohns was right.
    There's a lot of work being done on developing effective prompts for various use-cases, and, from what I've seen, having a good handle on how to use prompts makes a huge difference (input from friends working in marketing, UX/design, software engineering, data analysis). Future versions may be more accessible to lay people.

    Quote Originally Posted by Loki View Post
    Going by his cognitive skills, Alberjohns is ChatGPT.
    People like Alberjohns are gonna run the world into the ground

    Quote Originally Posted by Wraith View Post
    There's already quite a bit happening on that front, and I think we're going to see a lot more AIs being slammed together to accomplish ridiculous things.

    I converted a spare room into an AI farm and it's been pretty useful. It's a mistake to think the generative models can do anything unsupervised - being wrong is a fundamental part of how they work and it's where their creativity comes from. But being pretty close has been a huge timesaver for me.

    I'm all in on AI right now, and I won't rest until you're all unemployed and unemployable.
Don't hold out on us! What models are you running? What kind of hardware? Early impressions and insights? What's the bare minimum you need to get started?
    “Humanity's greatest advances are not in its discoveries, but in how those discoveries are applied to reduce inequity.”
    — Bill Gates

  9. #9
    Quote Originally Posted by Aimless View Post
    There's a lot of work being done on developing effective prompts for various use-cases, and, from what I've seen, having a good handle on how to use prompts makes a huge difference (input from friends working in marketing, UX/design, software engineering, data analysis). Future versions may be more accessible to lay people.
It's not finding the right prompt that's the issue, it's getting it to understand the entire context of the problem I'm trying to solve. Without that, it's just faster googling. For example, on Monday I'll probably be looking at why a client's website is getting a low score on Google PageSpeed Insights, and how we can improve that without losing functionality or spending a stupid amount of time rewriting everything. That's actually a problem that is very much within the wheelhouse of our current AI systems, but ChatGPT as it is right now sure as shit won't be able to tell me; even if it was capable of looking at the website in question (which it isn't), it would just give me the same type of generic tips you get if you google those questions.
    The world is dreaming
    Your god is a demon
    And mine is a mountain of souls screaming

  10. #10
    I believe "intelligence" might be taken out of context. The intelligence is limited by the data the bot has access to. Feed it garbage and...well...it won't seem very intelligent to an educated person.
    .

  11. #11
    Quote Originally Posted by Being View Post
    I believe "intelligence" might be taken out of context. The intelligence is limited by the data the bot has access to. Feed it garbage and...well...it won't seem very intelligent to an educated person.
That's not actually right. You can get away with feeding it garbage and letting it sort things out for you. There are a lot more ways for things to be bad than good, so the garbage has higher entropy, and neural networks will seek out the lower-entropy, better-quality portions of the dataset over time. It also helps it train towards dealing with real-world messiness, since real users will accidentally provide garbage input, and its training will let the AI suss out the intended input from the actual input.
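If you wanted to lean on that effect deliberately rather than hoping training sorts it out, the usual trick is to score every sample with some cheap quality proxy and keep only the best fraction. A toy version — the scorer here is a made-up heuristic purely for illustration; real pipelines often use something like a small model's perplexity instead:

```python
def quality_score(text: str) -> float:
    """Crude stand-in for a real scorer (e.g. perplexity under a small LM).
    Here: penalise non-alphabetic junk and very short strings."""
    if not text:
        return 0.0
    alpha = sum(c.isalpha() or c.isspace() for c in text) / len(text)
    length_bonus = min(len(text) / 50, 1.0)
    return alpha * length_bonus

def filter_dataset(samples: list[str], keep_fraction: float = 0.5) -> list[str]:
    """Rank samples by quality and keep the top fraction."""
    ranked = sorted(samples, key=quality_score, reverse=True)
    keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:keep]

data = [
    "a sensible English sentence about training data quality",
    "@@@###$$$ 0101010101",
    "short",
    "another reasonably clean sentence that a model could learn from",
]
print(filter_dataset(data, keep_fraction=0.5))
```

The two junk entries get scored out, which is the same "garbage is high entropy, keep the low-entropy part" intuition, just made explicit instead of left to the optimiser.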

  12. #12
    Quote Originally Posted by Aimless View Post
Don't hold out on us! What models are you running? What kind of hardware? Early impressions and insights? What's the bare minimum you need to get started?
A ton of models; I'll grab anything that looks interesting or like it has something novel to offer. Mostly GPT-NeoX-20B in the recent past, though I may switch to LLaMA or Alpaca soon. I'm still in the futzing-around stage - there's a lot of running random experiments to see how I can accelerate training regimes, where the good starting points are, and where the right trade-off is between complexity and capability. I'm also struggling to keep up with the field, since it's currently moving at lightspeed in a dozen different directions at once - it feels like huge breakthroughs are just a weekly thing now and I'm just trying not to fall further behind. The big players are also keeping their good shit locked up, so I have to build a lot of my own utilities, which can take a while.

    Bare minimum to get started would depend on what you wanted to do. If you don't want to do any training yourself, you can use existing models to get pretty far even on lower-end hardware. The Alpaca model Stanford just released last week is supposed to be GPT-3 comparable and runnable on lower-end machines. Ideally you should have an Nvidia card for this, just because a lot of the existing support is Nvidia specific. I did some work with AMD and it can work fine, but you're basically on your own for writing all your scripts, interfaces, and utilities. Stable Diffusion checkpoints before 2 should also be pretty runnable with consumer hardware as long as you have at least 16 GB VRAM. If you have AMD then DirectML is at least at the point where it'll let you run it, though it's got memory leaks so you'll have to restart your apps regularly.

I've currently got 2x A6000s w/ 48 GB VRAM connected via NVLink for AI. The VRAM is probably the most important part. Notably, this is not enough power to run GPT-3.5 at reasonable speed - that needs 8x A100 80GBs if you want to train, or a couple fewer if you want to run. The training set they used was total garbage though, and as Alpaca just proved, it's possible to reach similar capability levels with smaller models if you have better data cleaning; that, and the training protocols to prevent overfitting or catastrophic forgetting, are where I'm focused right now.
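The VRAM point is easy to sanity-check with napkin math: just holding the weights takes (parameter count) × (bytes per parameter), before you add activations, KV cache, or optimizer state. A quick sketch - note GPT-3.5's parameter count isn't public, so the 175B figure below is an assumption carried over from GPT-3:

```python
def weight_vram_gb(params_billions: float, bytes_per_param: float) -> float:
    """VRAM needed just to hold the weights, in GB (1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# fp16 = 2 bytes/param, int4-quantised = 0.5 bytes/param
print(weight_vram_gb(20, 2))    # GPT-NeoX-20B in fp16: 40 GB of weights
print(weight_vram_gb(175, 2))   # a 175B model in fp16: 350 GB, multi-A100 territory
print(weight_vram_gb(7, 0.5))   # a 7B model in int4: 3.5 GB, consumer-card territory
```

Training roughly triples-to-quadruples the footprint again (gradients plus optimizer moments), which is why the 8x A100 figure for training isn't crazy even though inference fits on less.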

    Lately I've been leaving the artbot online in discord for free use. It can be a little slow if I've got any training happening, but the model it's using is small enough to not actually disrupt anything so I can just let it run.

  13. #13
    Quote Originally Posted by Steely Glint View Post
It's not finding the right prompt that's the issue, it's getting it to understand the entire context of the problem I'm trying to solve. Without that, it's just faster googling. For example, on Monday I'll probably be looking at why a client's website is getting a low score on Google PageSpeed Insights, and how we can improve that without losing functionality or spending a stupid amount of time rewriting everything. That's actually a problem that is very much within the wheelhouse of our current AI systems, but ChatGPT as it is right now sure as shit won't be able to tell me; even if it was capable of looking at the website in question (which it isn't), it would just give me the same type of generic tips you get if you google those questions.
Like everything, proper prompting is a skill, and apparently it's currently a very highly paid skill. Each of the scripts I use these days is mostly written by AI; usually I just ask it to solve an easier problem to get something like what I want, then I can provide the tweaks to get it the rest of the way. Fine-tuned AIs are also vastly, vastly superior to generic ones like ChatGPT, even without much training time.

ChatGPT's limitation of not being able to look at your website is an artificial one. Have you tried the developer portal instead of the chat one? That one may also be limited, so I'm promising nothing, but there are definitely alternatives available, and while the model ChatGPT's using is probably only well suited to looking at the text on the page, there are a number of other models available for understanding aesthetics or whatever. PageSpeed Insights scores seem like something that would be really easy to train for, so I wouldn't be surprised if there were models out there somewhere to deal with that specifically.
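On the PageSpeed point specifically: the scores are already available programmatically via Google's PageSpeed Insights v5 API, so you could at least feed real numbers into whatever model or script you're using. A sketch of pulling the performance score - the response-parsing path below is the v5 shape as I remember it, so verify it against the actual docs, and `resp` here is a hand-made example fragment rather than a live API call:

```python
import urllib.parse

API = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def build_url(page: str, strategy: str = "mobile") -> str:
    """Build a PageSpeed Insights v5 request URL for the given page."""
    query = urllib.parse.urlencode({"url": page, "strategy": strategy})
    return f"{API}?{query}"

def performance_score(resp: dict) -> float:
    """Extract the 0-100 performance score from a v5 response body."""
    return resp["lighthouseResult"]["categories"]["performance"]["score"] * 100

# Hand-made response fragment in the v5 shape, standing in for a live call:
resp = {"lighthouseResult": {"categories": {"performance": {"score": 0.37}}}}
print(build_url("https://example.com"))
print(performance_score(resp))
```

Once the score and the per-audit details are structured data, "why is this page slow and what do we change" starts to look like exactly the kind of fine-tuning target I was describing above.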
