
Thread: Visions of the GPT-4 era

  1. #1

    Visions of the GPT-4 era

    We're about to enter the era of GPT-4 (and its cousins obv).

    What do you think it'll look like for you? What'll it bring, apart from an unceasing torrent of brain-melting AI-porn?

    GPT-3 was thrilling; GPT-3.5 was ominous. What we know of GPT-4—and of the industry's current approach to AI—suggests a shitstorm of unforeseeable proportions in our near future. Trivial guardrails, breathtakingly irresponsible governance, limited transparency, zero oversight, and an unprecedented potential for cascading systemic fuckups facilitated by the dumbest weird nerds on the planet. The whitepaper, and the coverage of OpenAI's processes in particular, highlight multiple ethical and social risks, but I have no doubt MSFT, Google, Meta etc. will be happy to exacerbate those problems a few months from now.

    Full disclosure: I have friends who're working on making lots of money off of AI and/or incorporating AI into their current work activities. I think it's difficult to appreciate the existing capabilities of this technology without using it or seeing it in use in real-world applications. From a tech perspective, I'm an enthusiast; from a moral perspective, I'm ambivalent.
    "One day, we shall die. All the other days, we shall live."

  2. #2
    From a tech perspective: My impression of trying to work with ChatGPT to get it to actually do something useful is that by the time you've explained to it precisely what you want you might as well have done it yourself. It's also wrong quite frequently, so you have to check its output manually anyway. I am more than impressed, however, by its ability to understand natural language. It might get things wrong, but it rarely misunderstands what you're asking for. That is, after all, all it was ever supposed to be: a language model. I suspect its future lies in being a natural language interface for other systems, AI/ML or otherwise.

    From a social perspective: fucking chaos. Everything's going to change. Again. Once we measured the passage of time by centuries or by the rise and fall of civilisations, or by the reigns of monarchs. In the first part of my life, things had already become so batshit we measured the passage of time in decades, the 70s, 80s and 90s etc were distinct enough from each other to pass as historical periods. What's next? Years? Maybe that bastard Alberjohns was right.
    When the sky above us fell
    We descended into hell
    Into kingdom come

  3. #3
    Going by his cognitive skills, Alberjohns is ChatGPT.
    Hope is the denial of reality

  4. #4
    Quote Originally Posted by Steely Glint View Post
    From a tech perspective: My impression of trying to work with ChatGPT to get it to actually do something useful is that by the time you've explained to it precisely what you want you might as well have done it yourself. It's also wrong quite frequently, so you have to check its output manually anyway. I am more than impressed, however, by its ability to understand natural language. It might get things wrong, but it rarely misunderstands what you're asking for. That is, after all, all it was ever supposed to be: a language model. I suspect its future lies in being a natural language interface for other systems, AI/ML or otherwise.
    There's already quite a bit happening on that front, and I think we're going to see a lot more AIs being slammed together to accomplish ridiculous things.

    I converted a spare room into an AI farm and it's been pretty useful. It's a mistake to think the generative models can do anything unsupervised - being wrong is a fundamental part of how they work and it's where their creativity comes from. But being pretty close has been a huge timesaver for me.

    I'm all in on AI right now, and I won't rest until you're all unemployed and unemployable.

  5. #5
    I'm unemployable right now!

    That said, I think the media has over-played how scary the "new AI" is. Sure, it can make me a picture of a duck hugging a kitten, but it's still not really there when it comes to actually innovative stuff like educating people, or writing whatever fan-fiction of Ariel Sharon it is that keeps Low-key employed.

    I also thought it was interesting that some university educators thought that ChatGPT would be an asset rather than a liability, because they thought it could be used to check on students' work rather than the kids having it do the work for them.
    In the future, the Berlin wall will be a mile high, and made of steel. You too will be made to crawl, to lick children's blood from jackboots. There will be no creativity, only productivity. Instead of love there will be fear and distrust, instead of surrender there will be submission. Contact will be replaced with isolation, and joy with shame. Hope will cease to exist as a concept. The Earth will be covered with steel and concrete. There will be an electronic policeman in every head. Your children will be born in chains, live only to serve, and die in anguish and ignorance.
    The universe we observe has precisely the properties we should expect if there is, at bottom, no design, no purpose, no evil, no good, nothing but blind, pitiless indifference.

  6. #6
    I cannot speak on this topic, I'm being watched....
    Faith is Hope (see Loki's sig for details)
    If hindsight is 20-20, why is it so often ignored?

  7. #7
    If anyone wants to try it and hasn't seen it yet, I did leave an art-bot in the TWF discord.

  8. #8
    Quote Originally Posted by Steely Glint View Post
    From a tech perspective: My impression of trying to work with ChatGPT to get it to actually do something useful is that by the time you've explained to it precisely what you want you might as well have done it yourself. It's also wrong quite frequently, so you have to check its output manually anyway. I am more than impressed, however, by its ability to understand natural language. It might get things wrong, but it rarely misunderstands what you're asking for. That is, after all, all it was ever supposed to be: a language model. I suspect its future lies in being a natural language interface for other systems, AI/ML or otherwise.

    From a social perspective: fucking chaos. Everything's going to change. Again. Once we measured the passage of time by centuries or by the rise and fall of civilisations, or by the reigns of monarchs. In the first part of my life, things had already become so batshit we measured the passage of time in decades, the 70s, 80s and 90s etc were distinct enough from each other to pass as historical periods. What's next? Years? Maybe that bastard Alberjohns was right.
    There's a lot of work being done on developing effective prompts for various use-cases, and, from what I've seen, having a good handle on how to use prompts makes a huge difference (input from friends working in marketing, UX/design, software engineering, data analysis). Future versions may be more accessible to lay people.
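
    To make that concrete, here's a rough sketch of the difference a structured prompt makes, using the openai Python package. The model name, prompt text, and response handling are illustrative only, and the library's interface changes between versions, so treat this as a sketch rather than gospel:
    Code:
    import openai

    openai.api_key = "sk-..."  # your own key goes here

    # A bare question vs. a prompt that carries role, context, and constraints.
    bare = [{"role": "user", "content": "Improve my website's speed."}]

    structured = [
        {"role": "system", "content": "You are a web performance consultant. "
                                      "Answer with concrete, prioritised steps."},
        {"role": "user", "content": (
            "Context: WordPress site on shared hosting, PageSpeed score 54 on mobile.\n"
            "Constraint: no framework rewrite, at most one day of work.\n"
            "Task: list the five changes most likely to raise the score, "
            "with the rough effort for each."
        )},
    ]

    for name, messages in [("bare", bare), ("structured", structured)]:
        reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
        print(f"--- {name} ---")
        print(reply.choices[0].message.content)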

    Quote Originally Posted by Loki View Post
    Going by his cognitive skills, Alberjohns is ChatGPT.
    People like Alberjohns are gonna run the world into the ground

    Quote Originally Posted by Wraith View Post
    There's already quite a bit happening on that front, and I think we're going to see a lot more AIs being slammed together to accomplish ridiculous things.

    I converted a spare room into an AI farm and it's been pretty useful. It's a mistake to think the generative models can do anything unsupervised - being wrong is a fundamental part of how they work and it's where their creativity comes from. But being pretty close has been a huge timesaver for me.

    I'm all in on AI right now, and I won't rest until you're all unemployed and unemployable.
    Don't hold out on us! What models are you running? What kind of hardware? Early impressions and insights? What's the bare minimum you need to get started?
    "One day, we shall die. All the other days, we shall live."

  9. #9
    Quote Originally Posted by Aimless View Post
    There's a lot of work being done on developing effective prompts for various use-cases, and, from what I've seen, having a good handle on how to use prompts makes a huge difference (input from friends working in marketing, UX/design, software engineering, data analysis). Future versions may be more accessible to lay people.
    It's not finding the right prompt that's the issue; it's getting it to understand the entire context of the problem I'm trying to solve. Without that, it's just faster googling. For example, on Monday I'll probably be looking at why a client's website is getting a low score on Google PageSpeed Insights, and how we can improve that without losing functionality or spending a stupid amount of time rewriting everything. That's actually a problem that is very much within the wheelhouse of our current AI systems, but ChatGPT as it is right now sure as shit won't be able to tell me. Even if it were capable of looking at the website in question (which it isn't), it would just give me the same type of generic tips you get if you google those questions.
    When the sky above us fell
    We descended into hell
    Into kingdom come

  10. #10
    I believe "intelligence" might be taken out of context. The intelligence is limited by the data the bot has access to. Feed it garbage and...well...it won't seem very intelligent to an educated person.
    Faith is Hope (see Loki's sig for details)
    If hindsight is 20-20, why is it so often ignored?

  11. #11
    Quote Originally Posted by Being View Post
    I believe "intelligence" might be taken out of context. The intelligence is limited by the data the bot has access to. Feed it garbage and...well...it won't seem very intelligent to an educated person.
    That's not actually right. You can get away with feeding it garbage and letting it sort things out for you. There are a lot more ways for things to be bad than good, so the garbage has higher entropy, and neural networks will seek out the lower-entropy, better-quality portions of the dataset over time. It also helps the model train toward dealing with real-world messiness, since real users will accidentally provide garbage input in real use cases, and its training will let the AI suss out the intended input from the actual input.

  12. #12
    Quote Originally Posted by Aimless View Post
    Don't hold out on us! What models are you running? What kind of hardware? Early impressions and insights? What's the bare minimum you need to get started?
    A ton of models; I'll grab anything that looks interesting or like it has something novel to offer. Mostly GPT-NeoX-20B in the recent past, though I may switch to LLaMA or Alpaca soon. I'm still in the futzing-around stage - there's a lot of running random experiments to see how I can accelerate training regimes, where the good starting points are, and where the right trade-off is between complexity and capability. I'm also struggling to keep up with the field since it's currently moving at lightspeed in a dozen different directions at once - it feels like huge breakthroughs are a weekly thing now and I'm just trying not to fall further behind. The big players are also keeping their good shit locked up, so I have to build a lot of my own utilities, which can take a while.

    Bare minimum to get started would depend on what you wanted to do. If you don't want to do any training yourself, you can use existing models to get pretty far even on lower-end hardware. The Alpaca model Stanford just released last week is supposed to be GPT-3 comparable and runnable on lower-end machines. Ideally you should have an Nvidia card for this, just because a lot of the existing support is Nvidia specific. I did some work with AMD and it can work fine, but you're basically on your own for writing all your scripts, interfaces, and utilities. Stable Diffusion checkpoints before 2 should also be pretty runnable with consumer hardware as long as you have at least 16 GB VRAM. If you have AMD then DirectML is at least at the point where it'll let you run it, though it's got memory leaks so you'll have to restart your apps regularly.
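
    For the "just run something locally" route, this is roughly the shape of it with the Hugging Face transformers library. The model name here is only an example (swap in whatever fits your VRAM - GPT-NeoX-20B wants roughly 40 GB+ in fp16), and you'll also need the accelerate package for device_map to work:
    Code:
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "EleutherAI/gpt-neo-1.3B"  # small example model; pick one to match your VRAM

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.float16,  # halves memory use on GPU
        device_map="auto",          # spread layers across available GPUs/CPU
    )

    prompt = "The bare minimum you need to run a language model at home is"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, temperature=0.8)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))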

    I've currently got 2x A6000s w/ 48 GB VRAM connected via NVLink for AI. The VRAM is probably the most important part. Notably, this is not enough power to run GPT-3.5 at reasonable speed - that needs 8x A100 80GBs if you want to train, or a couple fewer if you just want to run it. The training set they used was total garbage, though, and as Alpaca just proved, it's possible to reach similar capability levels with smaller models if you have better data cleaning; that, plus the training protocols to prevent overfitting or catastrophic forgetting, is where I'm focused right now.

    Lately I've been leaving the artbot online in Discord for free use. It can be a little slow if I've got any training happening, but the model it's using is small enough not to actually disrupt anything, so I can just let it run for anyone who wants to play with it. You're more than welcome to throw words at it to see what pops out; I'll even tell it to give you the non-Euclidean hands.
    Last edited by Wraith; 03-20-2023 at 05:09 PM.

  13. #13
    Quote Originally Posted by Steely Glint View Post
    It's not finding the right prompt that's the issue; it's getting it to understand the entire context of the problem I'm trying to solve. Without that, it's just faster googling. For example, on Monday I'll probably be looking at why a client's website is getting a low score on Google PageSpeed Insights, and how we can improve that without losing functionality or spending a stupid amount of time rewriting everything. That's actually a problem that is very much within the wheelhouse of our current AI systems, but ChatGPT as it is right now sure as shit won't be able to tell me. Even if it were capable of looking at the website in question (which it isn't), it would just give me the same type of generic tips you get if you google those questions.
    Like everything, proper prompting is a skill, and apparently it's currently a very highly paid one. Each of the scripts I use these days is mostly written by AI; usually I just ask it to solve an easier problem to get something like what I want, then I provide the tweaks to get it the rest of the way. Fine-tuned AIs are also vastly, vastly superior to generic ones like ChatGPT, even without much training time.

    ChatGPT's limitation of not being able to look at your website is an artificial one. Have you tried the developer portal instead of the chat one? That one may also be limited, so no promises, but there are definitely alternatives available, and while the model ChatGPT is using is probably only well suited to looking at the text on the page, there are a number of other models available for understanding aesthetics or whatever. PageSpeed Insights scores seem like something that would be really easy to train for, so I wouldn't be surprised if there were models out there somewhere to deal with that specifically.
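
    Roughly what I mean by going through the developer side instead of the chat window - you fetch the page yourself and hand the text over as context. Purely a sketch: the URL, prompt, and truncation limit are placeholders, and the model only sees the text this way, not the layout or scripts:
    Code:
    import openai
    import requests
    from bs4 import BeautifulSoup

    openai.api_key = "sk-..."

    # Grab the page and strip it down to text, since the chat UI can't browse for you.
    html = requests.get("https://example.com", timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)[:6000]  # stay inside the context window

    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You audit web pages for performance problems."},
            {"role": "user", "content": "Page text follows. List likely PageSpeed Insights issues:\n\n" + text},
        ],
    )
    print(reply.choices[0].message.content)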

  14. #14
    I think what I would want an AI to do is identify the parts of the code running on the site that are causing the score to dip, so that I can decide what to do about them: this bit here (file, line number(s)) blocks the critical path for x tenths of a second, and that is subtracting y from the score. This requires an understanding of what the code does when executed, how it interacts with PageSpeed Insights' simulated rendering of a page, and why something that might be devastating in one context might be completely fine in another.

    I don't think this is something ChatGPT would ever be able to do, but is something that an AI could be trained to do... but absent an off the shelf model that will do that reliably out of the box, it's simply faster for me to do the work myself than figure out how to make an AI do half the job for me. We only do a few of these a year, so those hours are unlikely to pay for themselves before the advent of AGI and the dawning of the singularity by around Christmas makes the whole economy irrelevant*.

    * joke**
    ** probably
    When the sky above us fell
    We descended into hell
    Into kingdom come

  15. #15
    Caved and did a ChatGPT at work the other day, taking my first step on the road to becoming an eloi.
    When the sky above us fell
    We descended into hell
    Into kingdom come

  16. #16
    Quote Originally Posted by Steely Glint View Post
    Caved and did a ChatGPT at work the other day, taking my first step on the road to becoming an eloi.
    I would honestly be hard pressed to find something useful for ChatGPT to do in my job. I've thought about various AI tasks (outside of highly specialized machine learning applications that are already used in my work), and the only use I've found for a massive plagiarism engine is when I'm occasionally asked to come up with some literature to support a claim or provide background information; I imagine ChatGPT et al. might be marginally better than a search engine at producing a first-draft reading list. But I only do that a few times a year.

    Most of what I do is relatively unstructured troubleshooting that would take far longer to provide a computer program enough information and context to be useful than to just figure it out myself. I routinely answer questions like 'why is our material now blue?' or 'how do we make this process faster without compromising performance properties?' or even 'what is the set of data that would be necessary to justify changing X process?' It requires so much contextual information - as well as so broad of an understanding of fundamental principles in chemistry, materials science, physiology, toxicology, mechanical engineering, and chemical engineering - that I think any answer you'd be likely to get would be absolute gibberish. I suppose I could have an AI generate a first draft of e.g. regulatory documents, but they are crafted with such care to nuances in language that I doubt we'd save much time.

    *shrugs* I view this as a curiosity more than anything else.
    "When I meet God, I am going to ask him two questions: Why relativity? And why turbulence? I really believe he will have an answer for the first." - Werner Heisenberg (maybe)

  17. #17
    I had to take an old class that had been written to connect to an FTP server with some PHP library that didn't support SFTP and rewrite it with another one that did. This is conceptually simple but requires a fair amount of keyboard dumping and poring over reference material to figure out how the functions are supposed to be used. The kind of thing I'd been looking for to try it out on.

    It made some mistakes and I had to nudge it in the right direction, but overall I think it was faster... probably? Maybe? I also asked it to describe the purpose of the code it was to rewrite first, so I was sure it knew what it was about, which it did.
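
    For the curious, this is the gist of the change, sketched in Python rather than pasting actual client PHP - paramiko standing in for the SFTP-capable library, with the host, credentials, and paths as obvious placeholders:
    Code:
    import paramiko

    def upload_report(local_path: str, remote_path: str) -> None:
        # The old class spoke plain FTP (think ftplib); the replacement speaks SFTP over SSH.
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # fine for a sketch; pin host keys in production
        client.connect("sftp.example.com", username="deploy", password="secret")
        try:
            sftp = client.open_sftp()
            sftp.put(local_path, remote_path)
            sftp.close()
        finally:
            client.close()

    upload_report("report.csv", "/incoming/report.csv")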

    It definitely has use cases, but I don't think it's taking anyone's jobs away until GPT-12 is released some time in April.
    When the sky above us fell
    We descended into hell
    Into kingdom come

  18. #18
    Quote Originally Posted by Nessus View Post
    I'm unemployable right now!

    That said, I think the media has over-played how scary the "new AI" is. Sure, it can make me a picture of a duck hugging a kitten, but it's still not really there when it comes to actually innovative stuff like educating people, or writing whatever fan-fiction of Ariel Sharon it is that keeps Low-key employed.

    I also thought it was interesting that some university educators thought that ChatGPT would be an asset rather than a liability, because they thought it could be used to check on students' work rather than the kids having it do the work for them.
    good to hear from you

    And agreed. I can't believe that the focus of generative AI is still...chatbots. Smarterchild is rising from his/her/their grave.

  19. #19
    I've been playing around with a few AI models at work. The image ones are amazing, automating a lot of work that used to take semesters of Photoshop classes. Been heavily using the ones that auto-remove watermarks from our theme park visits.

    ChatGPT at the moment reminds me of the old-school internet with AltaVista and Google. You do a search and the first couple of links are no-BS answers you can quickly follow and understand. Not like using a search engine nowadays, where you have to wade through link after link of SEO results. ChatGPT is what Google Assistant and all those in-home hubs should have been.

    Currently my wife is learning how to prompt it for VPK activities and having it write her lesson plans. I've been playing with ways to bend its rules and dish out insults; it really likes Shakespeare for some reason in that regard.

    It's odd how it can be so wrong on certain things. I asked it to find the largest library in my county and it was wrong; I told it it was wrong and it gave me another wrong answer; then I prompted it with something along the lines of comparing all the libraries before responding, and it gave me the right answer, with the correct square footage that was obviously much larger than its first two answers.
    "In a field where an overlooked bug could cost millions, you want people who will speak their minds, even if they’re sometimes obnoxious about it."

  20. #20
    https://www.tiktok.com/t/ZTR3Pg2jy/

    ChatGPT sources are...fake.
    "In a field where an overlooked bug could cost millions, you want people who will speak their minds, even if they’re sometimes obnoxious about it."

  21. #21
    Quote Originally Posted by Ominous Gamer View Post
    https://www.tiktok.com/t/ZTR3Pg2jy/

    ChatGPT sources are...fake.
    That's how it's been from the outset! Scitate.ai and probably others are trying to fill that gap. A few days ago, ChatGPT lied to me over and over again about M-theory.
    "One day, we shall die. All the other days, we shall live."

  22. #22
    Microsoft gave me access to Designer; it's like a ripoff of Canva, but...AI-powered and worse.

    It does, however, incorporate its image-creation AI really well. Been having a lot of fun prompting that. It gives off vibes of Google's DeepDream with how it handles eyes and fingers, but some of the stuff is impressively done. A coworker pointed out that unless prompted otherwise, most of the humanoids it creates are frowning or very obviously sad, which is a little freaky.
    "In a field where an overlooked bug could cost millions, you want people who will speak their minds, even if they’re sometimes obnoxious about it."

  23. #23
    Elon Musk says AI is being taught to lie.

    I pictured this example in my mind:
    Truth: y = m * x + b
    Biased: y = m * x
    Lie: y = 1
    I concluded that if AI is biased, it could still deliver something that approximates reality. But if AI lies, it becomes as useful as talking to a drunk guy.
    Freedom - When people learn to embrace criticism about politicians, since politicians are just employees like you and me.

  24. #24
    My day job is as a teacher. It's going to make my job very hard. I can catch plagiarism easily, but AI-written work is going to be harder to catch until someone makes software capable of detecting it reliably. Kids these days have no writing skills at all, and having this option is going to make them even dumber. This next generation is REALLY dumb. No attention span, no ability to write or think critically.

  25. #25
    Quote Originally Posted by aoshi View Post
    My day job is as a teacher. It's going to make my job very hard. I can catch plagiarism easily, but AI-written work is going to be harder to catch until someone makes software capable of detecting it reliably. Kids these days have no writing skills at all, and having this option is going to make them even dumber. This next generation is REALLY dumb. No attention span, no ability to write or think critically.
    Gonna have to be proactive. People are going to use it—and use it inappropriately, without understanding the limitations. Might be better to teach them how to use AI tools appropriately (e.g. one tool for quickly creating a plan/scaffold, another for references, yet another for quickly turning info into effective presentations, etc.). I dunno. We can't avoid the future.
    "One day, we shall die. All the other days, we shall live."

  26. #26
    Good luck teaching kids who are borderline illiterate how to properly use complex tools.
    Hope is the denial of reality

  27. #27
    Quote Originally Posted by Loki View Post
    Good luck teaching kids who are borderline illiterate how to properly use complex tools.
    This is why calculators never caught on.
    In the future, the Berlin wall will be a mile high, and made of steel. You too will be made to crawl, to lick children's blood from jackboots. There will be no creativity, only productivity. Instead of love there will be fear and distrust, instead of surrender there will be submission. Contact will be replaced with isolation, and joy with shame. Hope will cease to exist as a concept. The Earth will be covered with steel and concrete. There will be an electronic policeman in every head. Your children will be born in chains, live only to serve, and die in anguish and ignorance.
    The universe we observe has precisely the properties we should expect if there is, at bottom, no design, no purpose, no evil, no good, nothing but blind, pitiless indifference.

  28. #28
    You don't need to know math to plug numbers into a calculator. You need to know how to write if you want ChatGPT to produce anything useful.
    Hope is the denial of reality

  29. #29
    I realize your moniker is the namesake of deception, but please don't insult me like that, okay? We both know that it takes a little more than understanding integers to operate a graphing calculator, which was in its time predicted to destroy children's ability to learn! And for all I know, these days kids probably benefit from understanding what a for-loop is if they want to use Matlab or whatever it is their learning institutions provide.

    You may disagree with my assessment of my own capacity, but I think I have some understanding of writing, and you seem to be selling these students fairly short. A prompt such as "tell me what intelligent design is" produces a readable if questionable response. If anything, I think you would've been more aghast at the racial profiling the AI does, since according to John Oliver it flat-out refuses to write stuff about Jews!
    In the future, the Berlin wall will be a mile high, and made of steel. You too will be made to crawl, to lick children's blood from jackboots. There will be no creativity, only productivity. Instead of love there will be fear and distrust, instead of surrender there will be submission. Contact will be replaced with isolation, and joy with shame. Hope will cease to exist as a concept. The Earth will be covered with steel and concrete. There will be an electronic policeman in every head. Your children will be born in chains, live only to serve, and die in anguish and ignorance.
    The universe we observe has precisely the properties we should expect if there is, at bottom, no design, no purpose, no evil, no good, nothing but blind, pitiless indifference.

  30. #30
    If you're going for that level of complexity, you could get an equally good answer by using Google.
    Hope is the denial of reality
