
Thread: Visions of the GPT-4 era

  1. #1

    Visions of the GPT-4 era

    We're about to enter the era of GPT-4 (and its cousins obv).

    What do you think it'll look like for you? What'll it bring, apart from an unceasing torrent of brain-melting AI-porn?

    GPT-3 was thrilling; GPT-3.5 was ominous. What we know of GPT-4—and of the industry's current approach to AI—suggests a shitstorm of unforeseeable proportions in our near future. Trivial guardrails, breathtakingly irresponsible governance, limited transparency, zero oversight, and an unprecedented potential for cascading systemic fuckups facilitated by the dumbest weird nerds on the planet. The whitepaper and the coverage of OpenAI's processes in particular highlight multiple ethical and social risks, but I have no doubt MSFT, Google, Meta etc. will be happy to exacerbate those problems a few months from now.

    Full disclosure: I have friends who're working on making lots of money off of AI and/or incorporating AI into their current work activities. I think it's difficult to appreciate the existing capabilities of this technology without using it or seeing it in use in real-world applications. From a tech perspective, I'm an enthusiast; from a moral perspective, I'm ambivalent.
    “Humanity's greatest advances are not in its discoveries, but in how those discoveries are applied to reduce inequity.”
    — Bill Gates

  2. #2
    From a tech perspective: My impression of trying to work with ChatGPT to get it to actually do something useful is that by the time you've explained to it precisely what you want, you might as well have done it yourself. It's also wrong quite frequently, so you have to check its output manually anyway. I am more than impressed, however, by its ability to understand natural language. It might get things wrong, but it rarely misunderstands what you're asking for. That is, after all, all it was ever supposed to be: a language model. I suspect its future lies in being a natural language interface for other systems, AI/ML or otherwise.
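    Something like the sketch below is roughly what I have in mind: the model's only job is to turn a free-form request into a structured command some other system can act on. (This assumes the openai Python package's pre-1.0 interface and an API key in the environment; the booking-command schema is made up purely for illustration.)
    Code:
    import json
    import openai  # reads OPENAI_API_KEY from the environment

    request = "Book the big meeting room tomorrow at 10 for an hour."

    prompt = (
        "Convert the request into JSON with the keys action, room, date, "
        "start_time and duration_minutes. Reply with JSON only.\n"
        f"Request: {request}"
    )

    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )

    # Hand the parsed command to the downstream system, not the raw text.
    command = json.loads(resp["choices"][0]["message"]["content"])
    print(command)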

    From a social perspective: fucking chaos. Everything's going to change. Again. Once we measured the passage of time in centuries, by the rise and fall of civilisations or the reigns of monarchs. In the first part of my life, things had already become so batshit that we measured it in decades; the 70s, 80s and 90s were distinct enough from each other to pass as historical periods. What's next? Years? Maybe that bastard Alberjohns was right.
    The world is dreaming
    Your god is a demon
    And mine is a mountain of souls screaming

  3. #3
    Going by his cognitive skills, Alberjohns is ChatGPT.
    Hope is the denial of reality

  4. #4
    Quote Originally Posted by Steely Glint View Post
    From a tech perspective: My impression of trying to work with ChatGPT to get it to actually do something useful is that by the time you've explained to it precisely what you want, you might as well have done it yourself. It's also wrong quite frequently, so you have to check its output manually anyway. I am more than impressed, however, by its ability to understand natural language. It might get things wrong, but it rarely misunderstands what you're asking for. That is, after all, all it was ever supposed to be: a language model. I suspect its future lies in being a natural language interface for other systems, AI/ML or otherwise.
    There's already quite a bit happening on that front, and I think we're going to see a lot more AIs being slammed together to accomplish ridiculous things.

    I converted a spare room into an AI farm and it's been pretty useful. It's a mistake to think the generative models can do anything unsupervised - being wrong is a fundamental part of how they work and it's where their creativity comes from. But being pretty close has been a huge timesaver for me.
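    For a sense of scale, the software side of getting started can be as small as the sketch below (assuming the Hugging Face transformers and torch packages are installed; the model name is just a tiny example for illustration, not what I'm actually running):
    Code:
    from transformers import pipeline

    # Download and cache a small text-generation model, then sample from it.
    generator = pipeline("text-generation", model="distilgpt2")

    out = generator(
        "Draft a polite reminder email about an overdue invoice:",
        max_new_tokens=80,
        do_sample=True,
    )
    print(out[0]["generated_text"])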

    I'm all in on AI right now, and I won't rest until you're all unemployed and unemployable.

  5. #5
    I'm unemployable right now!

    That said, I think the media has overplayed how scary the "new AI" is. Sure, it can make me a picture of a duck hugging a kitten, but it's still not really there when it comes to genuinely innovative stuff like educating people, or writing whatever Ariel Sharon fan-fiction it is that keeps Low-key employed.

    I also thought it was interesting that some university educators saw ChatGPT as an asset rather than a liability, because it could be used to check students' work rather than the kids having it do the work for them.
    In the future, the Berlin wall will be a mile high, and made of steel. You too will be made to crawl, to lick children's blood from jackboots. There will be no creativity, only productivity. Instead of love there will be fear and distrust, instead of surrender there will be submission. Contact will be replaced with isolation, and joy with shame. Hope will cease to exist as a concept. The Earth will be covered with steel and concrete. There will be an electronic policeman in every head. Your children will be born in chains, live only to serve, and die in anguish and ignorance.
    The universe we observe has precisely the properties we should expect if there is, at bottom, no design, no purpose, no evil, no good, nothing but blind, pitiless indifference.

  6. #6
    I cannot speak on this topic, I'm being watched....
    .

  7. #7
    If anyone wants to try it and hasn't seen it yet, I did leave an art-bot in the TWF discord.

  8. #8
    Quote Originally Posted by Steely Glint View Post
    From a tech perspective: My impression of trying to work with ChatGPT to get it to actually do something useful is that by the time you've explained to it precisely what you want, you might as well have done it yourself. It's also wrong quite frequently, so you have to check its output manually anyway. I am more than impressed, however, by its ability to understand natural language. It might get things wrong, but it rarely misunderstands what you're asking for. That is, after all, all it was ever supposed to be: a language model. I suspect its future lies in being a natural language interface for other systems, AI/ML or otherwise.

    From a social perspective: fucking chaos. Everything's going to change. Again. Once we measured the passage of time in centuries, by the rise and fall of civilisations or the reigns of monarchs. In the first part of my life, things had already become so batshit that we measured it in decades; the 70s, 80s and 90s were distinct enough from each other to pass as historical periods. What's next? Years? Maybe that bastard Alberjohns was right.
    There's a lot of work being done on developing effective prompts for various use-cases, and, from what I've seen, having a good handle on how to use prompts makes a huge difference (input from friends working in marketing, UX/design, software engineering, data analysis). Future versions may be more accessible to lay people.
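    As a rough illustration of the difference a well-structured prompt makes (the wording below is invented for the example, not a template from anyone I know):
    Code:
    # A vague prompt leaves everything to chance.
    vague_prompt = "Write something about our new product."

    # A structured prompt pins down role, audience, constraints and output format.
    structured_prompt = """You are a copywriter at a B2B SaaS company.
    Task: write a three-sentence product announcement for LinkedIn.
    Audience: engineering managers evaluating CI tools.
    Constraints: no buzzwords, mention the free tier, end with a question.
    Output: plain text only."""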

    Quote Originally Posted by Loki View Post
    Going by his cognitive skills, Alberjohns is ChatGPT.
    People like Alberjohns are gonna run the world into the ground

    Quote Originally Posted by Wraith View Post
    There's already quite a bit happening on that front, and I think we're going to see a lot more AIs being slammed together to accomplish ridiculous things.

    I converted a spare room into an AI farm and it's been pretty useful. It's a mistake to think the generative models can do anything unsupervised - being wrong is a fundamental part of how they work and it's where their creativity comes from. But being pretty close has been a huge timesaver for me.

    I'm all in on AI right now, and I won't rest until you're all unemployed and unemployable.
    Don't hold out on us: what models are you running? What kind of hardware? Early impressions and insights? What's the bare minimum you need to get started?
    “Humanity's greatest advances are not in its discoveries, but in how those discoveries are applied to reduce inequity.”
    — Bill Gates

  9. #9
    Quote Originally Posted by Aimless View Post
    There's a lot of work being done on developing effective prompts for various use-cases, and, from what I've seen, having a good handle on how to use prompts makes a huge difference (input from friends working in marketing, UX/design, software engineering, data analysis). Future versions may be more accessible to lay people.
    It's not finding the right prompt that's the issue, it's getting it to understand the entire context of the problem I'm trying to solve. Without that, it's just faster googling. For example, on Monday I'll probably be looking at why a client's website is getting a low score on Google PageSpeed Insights, and how we can improve that without losing functionality or spending a stupid amount of time rewriting everything. That's actually a problem that is very much within the wheelhouse of our current AI systems, but ChatGPT as it is right now sure as shit won't be able to tell me. Even if it were capable of looking at the website in question (which it isn't), it would just give me the same type of generic tips you get if you google those questions.
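    To be concrete about what "context" means there: something would have to pull the actual numbers and hand them to the model, along the lines of the sketch below (assuming the public PageSpeed Insights v5 endpoint and the requests package; the response field names are my best understanding and may need adjusting):
    Code:
    import requests

    PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

    resp = requests.get(
        PSI_ENDPOINT,
        params={"url": "https://example.com", "strategy": "mobile"},
        timeout=60,
    )
    data = resp.json()

    score = data["lighthouseResult"]["categories"]["performance"]["score"]
    audits = data["lighthouseResult"]["audits"]

    # Keep only the failing audits so the prompt stays focused on real problems.
    problems = [
        f'{a["title"]}: {a.get("displayValue", "")}'
        for a in audits.values()
        if a.get("score") is not None and a["score"] < 0.9
    ]

    prompt = (
        f"This site scores {score:.2f} on PageSpeed Insights (mobile).\n"
        "Failing audits:\n- " + "\n- ".join(problems) +
        "\nSuggest specific fixes for this site, not generic advice."
    )
    print(prompt)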
    The world is dreaming
    Your god is a demon
    And mine is a mountain of souls screaming

  10. #10
    I believe "intelligence" might be taken out of context here. The bot's intelligence is limited by the data it has access to. Feed it garbage and... well... it won't seem very intelligent to an educated person.
    .
