
Thread: Something is wrong on the internet

  1. #1

    Something is wrong on the internet

    https://medium.com/@jamesbridle/some...t-c39c471271d2

    I recommend you read the article at the source for the full links and embedded video, but the text is here.

    Long story short, though, and stripped of the vaguely conspiratorial tone of the article: Kids' YouTube is a big deal. Some channels accumulate views in the billions. Other channels churn out this content en masse, clearly produced by automated systems and driven by algorithms, with the intention of maximising ad revenue. Still other channels, run by, e.g., trolls and hilarious pranksters, make videos using imagery intended for children but with darker content, meant to disturb, whether as parody or 4chan-esque rakishness. And somewhere along the line, the disturbing finds its way into the algorithmically generated 'for children' videos.

    I’m James Bridle. I’m a writer and artist concerned with technology and culture. I usually write on my own blog, but frankly I don’t want what I’m talking about here anywhere near my own site. Please be advised: this essay describes disturbing things and links to disturbing graphic and video content. You don’t have to read it, and are advised to take caution exploring further.

    As someone who grew up on the internet, I credit it as one of the most important influences on who I am today. I had a computer with internet access in my bedroom from the age of 13. It gave me access to a lot of things which were totally inappropriate for a young teenager, but it was OK. The culture, politics, and interpersonal relationships which I consider to be central to my identity were shaped by the internet, in ways that I have always considered to be beneficial to me personally. I have always been a critical proponent of the internet and everything it has brought, and broadly considered it to be emancipatory and beneficial. I state this at the outset because thinking through the implications of the problem I am going to describe troubles my own assumptions and prejudices in significant ways.


    One of the so-far hypothetical questions I ask myself frequently is how I would feel about my own children having the same kind of access to the internet today. And I find the question increasingly difficult to answer. I understand that this is a natural evolution of attitudes which happens with age, and at some point this question might be a lot less hypothetical. I don’t want to be a hypocrite about it. I would want my kids to have the same opportunities to explore and grow and express themselves as I did. I would like them to have that choice. And this belief broadens into attitudes about the role of the internet in public life as a whole.


    I’ve also been aware for some time of the increasingly symbiotic relationship between younger children and YouTube. I see kids engrossed in screens all the time, in pushchairs and in restaurants, and there’s always a bit of a Luddite twinge there, but I am not a parent, and I’m not making parental judgments for or on anyone else. I’ve seen family members and friends’ children plugged into Peppa Pig and nursery rhyme videos, and it makes them happy and gives everyone a break, so OK.


    But I don’t even have kids and right now I just want to burn the whole thing down.


    Someone or something or some combination of people and things is using YouTube to systematically frighten, traumatise, and abuse children, automatically and at scale, and it forces me to question my own beliefs about the internet, at every level. Much of what I am going to describe next has been covered elsewhere, although none of the mainstream coverage I’ve seen has really grasped the implications of what seems to be occurring.
    To begin: Kids’ YouTube is definitely and markedly weird. I’ve been aware of its weirdness for some time. Last year, there were a number of articles posted about the Surprise Egg craze. Surprise Eggs videos depict, often at excruciating length, the process of unwrapping Kinder and other egg toys. That’s it, but kids are captivated by them. There are thousands and thousands of these videos and thousands and thousands, if not millions, of children watching them.


    From the article linked above:


    The maker of my particular favorite videos is “Blu Toys Surprise Brinquedos & Juegos,” and since 2010 he seems to have accrued 3.7 million subscribers and just under 6 billion views for a kid-friendly channel entirely devoted to opening surprise eggs and unboxing toys. The video titles are a continuous pattern of obscure branded lines and tie-ins: “Surprise Play Doh Eggs Peppa Pig Stamper Cars Pocoyo Minecraft Smurfs Kinder Play Doh Sparkle Brilho,” “Cars Screamin’ Banshee Eats Lightning McQueen Disney Pixar,” “Disney Baby Pop Up Pals Easter Eggs SURPRISE.”


    As I write this he has done a total of 4,426 videos and counting. With so many views — for comparison, Justin Bieber’s official channel has more than 10 billion views, while full-time YouTube celebrity PewDiePie has nearly 12 billion — it’s likely this man makes a living as a pair of gently murmuring hands that unwrap Kinder eggs. (Surprise-egg videos are all accompanied by pre-roll, and sometimes mid-video, ads.)
    That should give you some idea of just how odd the world of kids online video is, and that list of video titles hints at the extraordinary range and complexity of this situation. We’ll get into the latter in a minute; for the moment know that it’s already very strange, if apparently pretty harmless, out there.


    Another huge trope, especially among the youngest children, is nursery rhyme videos.


    Little Baby Bum, which made the above video, is the 7th most popular channel on YouTube. With just 515 videos, they have accrued 11.5 million subscribers and 13 billion views. Again, there are questions as to the accuracy of these numbers, which I’ll get into shortly, but the key point is that this is a huge, huge network and industry.


    On-demand video is catnip to both parents and to children, and thus to content creators and advertisers. Small children are mesmerised by these videos, whether it’s familiar characters and songs, or simply bright colours and soothing sounds. The length of many of these videos — one common video tactic is to assemble many nursery rhyme or cartoon episodes into hour+ compilations — and the way that length is marketed as part of the video’s appeal, points to the amount of time some kids are spending with them.


    YouTube broadcasters have thus developed a huge number of tactics to draw parents’ and children’s attention to their videos, and the advertising revenues that accompany them. The first of these tactics is simply to copy and pirate other content. A simple search for “Peppa Pig” on YouTube in my case yielded “About 10,400,000 results”, and the front page is almost entirely videos from the verified “Peppa Pig Official Channel”, while one is from an unverified channel called Play Go Toys, which you really wouldn’t notice unless you were looking out for it:


    Play Go Toys’ channel consists of (I guess?) pirated Peppa Pig and other cartoons, videos of toy unboxings (another kid magnet), and videos of, one supposes, the channel owner’s own children. I am not alleging anything bad about Play Go Toys; I am simply illustrating how the structure of YouTube facilitates the delamination of content and author, and how this impacts on our awareness and trust of its source.


    As another blogger notes, one of the traditional roles of branded content is that it is a trusted source. Whether it’s Peppa Pig on children’s TV or a Disney movie, whatever one’s feelings about the industrial model of entertainment production, they are carefully produced and monitored so that kids are essentially safe watching them, and can be trusted as such. This no longer applies when brand and content are disassociated by the platform, and so known and trusted content provides a seamless gateway to unverified and potentially harmful content.


    (Yes, this is the exact same process as the delamination of trusted news media on Facebook feeds and in Google results that is currently wreaking such havoc on our cognitive and political systems and I am not going to explicitly explore that relationship further here, but it is obviously deeply significant.)


    A second way of increasing hits on videos is through keyword/hashtag association, which is a whole dark art unto itself. When some trend, such as Surprise Egg videos, reaches critical mass, content producers pile onto it, creating thousands and thousands more of these videos in every possible iteration. This is the origin of all the weird names in the list above: branded content and nursery rhyme titles and “surprise egg” all stuffed into the same word salad to capture search results, sidebar placement, and “up next” autoplay rankings.
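    To make the mechanics concrete, here is a minimal sketch of that kind of combinatorial title generation (the keyword lists are invented for illustration, not taken from any real channel’s tooling):

    ```python
    # A sketch of keyword-salad title generation. The lists are invented;
    # nothing here comes from a real channel's tooling.
    import itertools

    brands = ["Peppa Pig", "Minecraft", "Smurfs", "Pocoyo"]
    tropes = ["Surprise Eggs", "Finger Family", "Learn Colors"]
    hooks = ["Nursery Rhymes", "Play Doh", "Kinder"]

    # Every combination becomes a candidate title, each one another lottery
    # ticket in search results, sidebar placement, and autoplay rankings.
    titles = [" ".join(combo) for combo in itertools.product(brands, tropes, hooks)]

    print(len(titles))  # 4 * 3 * 3 = 36 titles from three short lists
    print(titles[0])    # Peppa Pig Surprise Eggs Nursery Rhymes
    ```

    Three short lists already yield 36 titles; a few dozen entries per list yields tens of thousands, which is the scale of iteration being described.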


    A striking example of the weirdness is the Finger Family videos (harmless example embedded above). I have no idea where they came from or the origin of the children’s rhyme at the core of the trope, but there are at least 17 million versions of this currently on YouTube, and again they cover every possible genre, with billions and billions of aggregated views.


    Once again, the view numbers of these videos must be taken under serious advisement. A huge number of these videos are essentially created by bots and viewed by bots, and even commented on by bots. That is a whole strange world in and of itself. But it shouldn’t obscure that there are also many actual children, plugged into iPhones and tablets, watching these over and over again — in part accounting for the inflated view numbers — learning to type basic search terms into the browser, or simply mashing the sidebar to bring up another video.

    What I find somewhat disturbing about the proliferation of even (relatively) normal kids videos is the impossibility of determining the degree of automation which is at work here; how to parse out the gap between human and machine. The example above, from a channel called Bounce Patrol Kids, with almost two million subscribers, shows this effect in action. It posts professionally produced videos, with dedicated human actors, at the rate of about one per week. Once again, I am not alleging anything untoward about Bounce Patrol, which clearly follows in the footsteps of pre-digital kid sensations like their fellow Australians The Wiggles.
    And yet, there is something weird about a group of people endlessly acting out the implications of a combination of algorithmically generated keywords: “Halloween Finger Family & more Halloween Songs for Children | Kids Halloween Songs Collection”, “Australian Animals Finger Family Song | Finger Family Nursery Rhymes”, “Farm Animals Finger Family and more Animals Songs | Finger Family Collection - Learn Animals Sounds”, “Safari Animals Finger Family Song | Elephant, Lion, Giraffe, Zebra & Hippo! Wild Animals for kids”, “Superheroes Finger Family and more Finger Family Songs! Superhero Finger Family Collection”, “Batman Finger Family Song — Superheroes and Villains! Batman, Joker, Riddler, Catwoman” and on and on and on. This is content production in the age of algorithmic discovery — even if you’re a human, you have to end up impersonating the machine.


    Other channels do away with the human actors to create infinite reconfigurable versions of the same videos over and over again. What is occurring here is clearly automated: stock animations, audio tracks, and lists of keywords are assembled in their thousands to produce an endless stream of videos. The above channel, Videogyan 3D Rhymes — Nursery Rhymes & Baby Songs, posts several videos a week, in increasingly byzantine combinations of keywords. They have almost five million subscribers — more than double Bounce Patrol — although once again it’s impossible to know who or what is actually racking up these millions and millions of views.
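    As a sketch of what that assembly line might look like (the asset names and the render queue below are hypothetical placeholders, not anything Videogyan or any other channel is known to use):

    ```python
    # A hypothetical sketch of a fully automated video assembly line: for
    # each generated title, pick stock assets at random and queue a render.
    # The asset names are invented placeholders.
    import random

    stock_animations = ["head_swap_loop", "egg_unwrap_loop", "color_dance_loop"]
    stock_audio = ["finger_family_mix", "learn_colors_mix", "nursery_medley_mix"]

    def assemble_video(title: str) -> dict:
        """Pair a keyword-salad title with randomly chosen stock assets."""
        return {
            "title": title,
            "animation": random.choice(stock_animations),
            "audio": random.choice(stock_audio),
        }

    # Several uploads a week is just a loop; no human needs to watch the output.
    render_queue = [
        assemble_video(f"Learn Colors Finger Family Collection Part {i}")
        for i in range(1, 8)
    ]
    print(render_queue[0])
    ```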
    I’m trying not to turn this essay into an endless list of examples, but it’s important to grasp how vast this system is, and how indeterminate its actions, process, and audience. It’s also international: there are variations of Finger Family and Learn Colours videos for Tamil epics and Malaysian cartoons which are unlikely to pop up in any Anglophone search results. This very indeterminacy and reach is key to its existence, and its implications. Its dimensionality makes it difficult to grasp, or even to really think about.


    We’ve encountered pretty clear examples of the disturbing outcomes of full automation before — some of which have been thankfully leavened with a dark kind of humour, others not so much. Much has been made of the algorithmic interbreeding of stock photo libraries and on-demand production of everything from t-shirts to coffee mugs to infant onesies and cell phone covers. The above example, available until recently on Amazon, is one such case, and the story of how it came to occur is fascinating and weird but essentially comprehensible. Nobody set out to create phone cases with drugs and medical equipment on them; it was just a deeply weird mathematical/probabilistic outcome. The fact that it took a while to notice might ring some alarm bells, however.


    Likewise, the case of the “Keep Calm and Rape A Lot” t-shirts (along with the “Keep Calm and Knife Her” and “Keep Calm and Hit Her” ones) is depressing and distressing but comprehensible. Nobody set out to create these shirts: they just paired an unchecked list of verbs and pronouns with an online image generator. It’s quite possible that none of these shirts ever physically existed, were ever purchased or worn, and thus that no harm was done. Once again though, the people creating this content failed to notice, and neither did the distributor. They literally had no idea what they were doing.
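    The mechanism is easy to reconstruct in outline (the word lists below are deliberately tame placeholders; the point is the missing filter, not the words):

    ```python
    # A sketch of the "Keep Calm" generator failure: a slogan template
    # crossed with an unvetted word list, with no blocklist or human review
    # before the result becomes a product listing. Placeholder words only.
    verbs = ["Carry On", "Dance", "Sing"]  # imagine thousands, scraped, unread
    suffixes = ["", "A Lot", "All Night"]

    catalogue = [
        f"Keep Calm and {verb} {suffix}".strip()
        for verb in verbs
        for suffix in suffixes
    ]
    print(catalogue)
    # With a long enough scraped list, one toxic verb becomes a product
    # automatically, and nobody notices until a customer does.
    ```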


    What I will argue, on the basis of these cases and of those I’m going to describe further, is that the scale and logic of the system is complicit in these outputs, and requires us to think through their implications.
    (Also again: I’m not going to dig into the wider social implications of such processes outside the scope of what I am writing about here, but it’s clear that one can draw a clear line from examples such as these to pressing contemporary issues such as racial and gender bias in big data and machine intelligence-driven systems, which require urgent attention but in the same manner do not have anything resembling easy or even preferable solutions.)


    Let’s look at just one video among the piles of kid videos, and try to parse out where it comes from. It’s important to stress that I didn’t set out to find this particular video: it appeared organically and highly ranked in a search for ‘finger family’ in an incognito browser window (i.e. it should not have been influenced by previous searches). This automation takes us to very, very strange places, and at this point the rabbit hole is so deep that it’s impossible to know how such a thing came into being.


    Once again, a content warning: while not being explicitly inappropriate, the video is decidedly off, and contains elements which might trouble — frankly, should trouble — anyone. It’s very mild on the scale of such things, but. I describe it below if you don’t want to watch it. This warning will recur.


    The above video is entitled Wrong Heads Disney Wrong Ears Wrong Legs Kids Learn Colors Finger Family 2017 Nursery Rhymes. The title alone confirms its automated provenance. I have no idea where the “Wrong Heads” trope originates, but I can imagine, as with the Finger Family Song, that somewhere there is a totally original and harmless version that made enough kids laugh that it started to climb the algorithmic rankings until it made it onto the word salad lists, combining with Learn Colors, Finger Family, and Nursery Rhymes, and all of these tropes — not merely as words but as images, processes, and actions — to be mixed into what we see here.


    The video consists of a regular version of the Finger Family song played over an animation of character heads and bodies from Disney’s Aladdin swapping and intersecting. Again, this is weird but frankly no more than the Surprise Egg videos or anything else kids watch. I get how innocent it is. But the offness creeps in with the appearance of a non-Aladdin character — Agnes, the little girl from Despicable Me. This little girl does not swap heads or bodies, she just floats around the screen, sometimes laughing a little cartoon laugh, and sometimes, repeatedly, bursting into floods of tears.


    The video’s creator, BABYFUN TV (screenshot above), has produced many similar videos. As many of the Wrong Heads videos as I could bear to watch were all off in the same way. The character Hope from Inside Out weeps through a Smurfs and Trolls head swap. It goes on and on. BABYFUN TV only has 170 subscribers and very low view rates, but then there are thousands and thousands of channels like this. Numbers in the long tail aren’t significant in the abstract, but in their accumulation.


    The question becomes: how did this come to be? The “Bad Baby” trope also present on BABYFUN TV features the same crying. While I find it disturbing, I can understand how it might provide some of the rhythm or cadence or relation to their own experience that actual babies are attracted to in this content, although it has been warped and stretched through algorithmic repetition and recombination in ways that I don’t think anyone actually wants to happen.


    Toy Freaks is a hugely popular channel (68th on the platform) which features a father and his two daughters playing out — or in some cases perhaps originating — many of the tropes we’ve identified so far, including “Bad Baby”, above. As well as nursery rhymes and learning colours, Toy Freaks specialises in gross-out situations, as well as activities which many, many viewers feel border on abuse and exploitation, if not cross the line entirely, including videos of the children vomiting and in pain. Toy Freaks is a YouTube verified channel, whatever that means. (I think we know by now it means nothing useful.)


    As with Bounce Patrol Kids, however you feel about the content of these videos, it feels impossible to know where the automation starts and ends, who is coming up with the ideas and who is roleplaying them. In turn, the amplification of tropes in popular, human-led channels such as Toy Freaks leads to them being endlessly repeated across the network in increasingly outlandish and distorted recombinations.


    There’s a second level of what I’m characterising as human-led videos which are much more disturbing than the mostly distasteful activities of Toy Freaks and their kin. Here is a relatively mild, but still upsetting example:


    A step beyond the simply pirated Peppa Pig videos mentioned previously are the knock-offs. These too seem to teem with violence. In the official Peppa Pig videos, Peppa does indeed go to the dentist, and the episode in which she does so seems to be popular — although, confusingly, what appears to be the real episode is only available on an unofficial channel. In the official timeline, Peppa is appropriately reassured by a kindly dentist. In the version above, she is basically tortured, before turning into a series of Iron Man robots and performing the Learn Colours dance. A search for “peppa pig dentist” returns the above video on the front page, and it only gets worse from here.
    Disturbing Peppa Pig videos, which tend towards extreme violence and fear, with Peppa eating her father or drinking bleach, are, it turns out, very widespread. They make up an entire YouTube subculture. Many are obviously parodies, or even satires of themselves, in the pretty common style of the internet’s outrageous, deliberately offensive kind. All the 4chan tropes are there, the trolls are out, we know this.


    In the example above, the agency is less clear: the video starts with a trollish Peppa parody, but later syncs into the kind of automated repetition of tropes we’ve seen already. I don’t know which camp it belongs to. On Twitter, it seems people are referring to some of these examples as #ElsaGate, connecting it directly to 4chan, and maybe it’s just trolls. I kind of hope it is. But I don’t think so. Trolls don’t cover the intersection of human actors and more automated examples further down the line. They’re at play here, but they’re not the whole story.


    I suppose it’s naive not to see the deliberate versions of this coming, but many are so close to the original, and so unsignposted — like the dentist example — that many, many kids are watching them. I understand that most of them are not trying to mess kids up, not really, even though they are.


    I’m trying to understand why, as plainly and simply troubling as it is, this is not a simple matter of “won’t somebody think of the children” hand-wringing. Obviously this content is inappropriate, obviously there are bad actors out there, obviously some of these videos should be removed. Obviously too this raises questions of fair use, appropriation, free speech and so on. But reports which simply understand the problem through this lens fail to fully grasp the mechanisms being deployed, and thus are incapable of thinking its implications in totality, and responding accordingly.


    The New York Times, headlining their article on a subset of this issue “On YouTube Kids, Startling Videos Slip Past Filters”, highlights the use of knock-off characters and nursery rhymes in disturbing content, and frames it as a problem of moderation and legislation. YouTube Kids, an official app which claims to be kid-safe but is quite obviously not, is the problem identified, because it wrongly engenders trust in users. An article in the British tabloid The Sun, “Kids left traumatised after sick YouTube clips showing Peppa Pig characters with knives and guns appear on app for children” takes the same line, with an added dose of right-wing technophobia and self-righteousness. But both stories take at face value YouTube’s assertions that these results are incredibly rare and quickly removed: assertions utterly refuted by the proliferation of the stories themselves, and the growing number of social media posts, largely by concerned parents, from which they arise.


    But as with Toy Freaks, what is concerning to me about the Peppa videos is how the obvious parodies and even the shadier knock-offs interact with the legions of algorithmic content producers until it is completely impossible to know what is going on. (“The creatures outside looked from pig to man, and from man to pig, and from pig to man again; but already it was impossible to say which was which.”)


    Good Baby Toys channel


    Here’s what is basically a version of Toy Freaks produced in Asia (screenshot above). Here’s one from Russia. I don’t really want to use the term “human-led” any more about these videos, although they contain all the same tropes and actual people acting them out. I no longer have any idea what’s going on here and I really don’t want to and I’m starting to think that that is kind of the point. That’s part of why I’m starting to think about the deliberateness of this all. There is a lot of effort going into making these. More than spam revenue can generate — can it? Who’s writing these scripts, editing these videos? Once again, I want to stress: this is still really mild, even funny stuff compared to a lot of what is out there.


    Here are a few things which are disturbing me:


    The first is the level of horror and violence on display. Sometimes it’s troll-y gross-out stuff; most of the time it seems deeper, and more unconscious than that. The internet has a way of amplifying and enabling many of our latent desires; in fact, it’s what it seems to do best. I spend a lot of time arguing for this tendency, with regards to human sexual freedom, individual identity, and other issues. Here, and overwhelmingly it sometimes feels, that tendency is itself a violent and destructive one.


    The second is the levels of exploitation, not of children because they are children but of children because they are powerless. Automated reward systems like YouTube algorithms necessitate exploitation in the same way that capitalism necessitates exploitation, and if you’re someone who bristles at the second half of that equation then maybe this should be what convinces you of its truth. Exploitation is encoded into the systems we are building, making it harder to see, harder to think and explain, harder to counter and defend against. Not in a future of AI overlords and robots in the factories, but right here, now, on your screen, in your living room and in your pocket.


    Many of these latest examples confound any attempt to argue that nobody is actually watching these videos, that these are all bots. There are humans in the loop here, even if only on the production side, and I’m pretty worried about them too.


    I’ve written enough, too much, but I feel like I actually need to justify all this raving about violence and abuse and automated systems with an example that sums it up. Maybe after everything I’ve said you won’t think it’s so bad. I don’t know what to think any more.


    This video, BURIED ALIVE Outdoor Playground Finger Family Song Nursery Rhymes Animation Education Learning Video, contains all of the elements we’ve covered above, and takes them to another level. Familiar characters, nursery tropes, keyword salad, full automation, violence, and the very stuff of kids’ worst dreams. And of course there are vast, vast numbers of these videos. Channel after channel after channel of similar content, churned out at the rate of hundreds of new videos every week. Industrialised nightmare production.


    For the final time: There is more violent and more sexual content like this available. I’m not going to link to it. I don’t believe in traumatising other people, but it’s necessary to keep stressing it, and not dismiss the psychological effect on children of things which aren’t overtly disturbing to adults, just incredibly dark and weird.


    A friend who works in digital video described to me what it would take to make something like this: a small studio of people (half a dozen, maybe more) making high volumes of low quality content to reap ad revenue by tripping certain requirements of the system (length in particular seems to be a factor). According to my friend, online kids’ content is one of the few alternative ways of making money from 3D animation because the aesthetic standards are lower and independent production can profit through scale. It uses existing and easily available content (such as character models and motion-capture libraries) and it can be repeated and revised endlessly and mostly meaninglessly because the algorithms don’t discriminate — and neither do the kids.
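    As a back-of-envelope check on whether that studio model pays, every figure below being my own assumption for illustration rather than my friend’s numbers or YouTube’s:

    ```python
    # Back-of-envelope content-farm economics. Every number is an assumed,
    # illustrative figure; none come from YouTube or the description above.
    videos_per_week = 30
    avg_views_per_video = 200_000  # assumed long-tail average
    rpm_usd = 1.00                 # assumed ad revenue per 1,000 views

    weekly_revenue = videos_per_week * avg_views_per_video / 1000 * rpm_usd
    print(f"${weekly_revenue:,.0f} per week")  # $6,000 on these assumptions
    ```

    Even at a very low RPM, sheer volume can pay for a small studio, which is the point: scale, not quality, is the product.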


    These videos, wherever they are made, however they come to be made, and whatever their conscious intention (i.e. to accumulate ad revenue) are feeding upon a system which was consciously intended to show videos to children for profit. The unconsciously-generated, emergent outcomes of that are all over the place.


    To expose children to this content is abuse. We’re not talking about the debatable but undoubtedly real effects of film or videogame violence on teenagers, or the effects of pornography or extreme images on young minds, which were alluded to in my opening description of my own teenage internet use. Those are important debates, but they’re not what is being discussed here. What we’re talking about is very young children, effectively from birth, being deliberately targeted with content which will traumatise and disturb them, via networks which are extremely vulnerable to exactly this form of abuse. It’s not about trolls, but about a kind of violence inherent in the combination of digital systems and capitalist incentives. It’s down to that level of the metal.


    This, I think, is my point: The system is complicit in the abuse.


    And right now, right here, YouTube and Google are complicit in that system. The architecture they have built to extract the maximum revenue from online video is being hacked by persons unknown to abuse children, perhaps not even deliberately, but at a massive scale. I believe they have an absolute responsibility to deal with this, just as they have a responsibility to deal with the radicalisation of (mostly) young (mostly) men via extremist videos — of any political persuasion. They have so far shown absolutely no inclination to do this, which is in itself despicable. However, a huge part of my troubled response to this issue is that I have no idea how they can respond without shutting down the service itself, and most systems which resemble it. We have built a world which operates at scale, where human oversight is simply impossible, and no manner of inhuman oversight will counter most of the examples I’ve used in this essay. The asides I’ve kept in parentheses throughout, if expanded upon, would allow one to rewrite everything I’ve said, with very little effort, to be not about child abuse, but about white nationalism, about violent religious ideologies, about fake news, about climate denialism, about 9/11 conspiracies.


    This is a deeply dark time, in which the structures we have built to sustain ourselves are being used against us — all of us — in systematic and automated ways. It is hard to keep faith with the network when it produces horrors such as these. While it is tempting to dismiss the wilder examples as trolling, of which a significant number certainly are, that fails to account for the sheer volume of content weighted in a particularly grotesque direction. It presents many and complexly entangled dangers, including that, just as with the increasing focus on alleged Russian interference in social media, such events will be used as justification for increased control over the internet, increasing censorship, and so on. This is not what many of us want.


    I’m going to stop here, saying only this:


    What concerns me is not just the violence being done to children here, although that concerns me deeply. What concerns me is that this is just one aspect of a kind of infrastructural violence being done to all of us, all of the time, and we’re still struggling to find a way to even talk about it, to describe its mechanisms and its actions and its effects. As I said at the beginning of this essay: this is being done by people and by things and by a combination of things and people. Responsibility for its outcomes is impossible to assign but the damage is very, very real indeed.
    Fucking tech companies and their fucking half-assed algorithms, and premature machine-learning.

    All three of the major social media networks appear to be riddled with this kind of nonsense, and none of them seem to want to take the slightest responsibility for any of it.
    When the sky above us fell
    We descended into hell
    Into kingdom come

  2. #2
    I'm seeing this first hand. I have an old Nexus 7 (still the best bit of tech I've ever bought, btw) which I let my youngest (4) have a go on at the weekends. We started watching some things on YouTube together and then I realised she was watching it on her own (because my YouTube homepage was chock-full of recommendations for Peppa Pig...), so I installed YouTube Kids instead.

    She's utterly hooked on it and loves watching really strange videos about kinder-egg style toys. Cutely narrated, it's basically just hours and hours of toy eggs being opened with toys inside.

    I've always thought it was a bit weird, but that article has really made me realise just how weird and possibly sinister it all is. Time for it to go, methinks.

    Thanks.

  3. #3
    TL;DR - skim read only some points.

    My children love YouTube from time to time. We first discovered the "toy unboxing" phenomenon a couple of years ago while on holiday in Mexico. Our daughter Chloe was then about 18 months and was struggling with losing her bottle cold turkey (she'd chewed through her teats); in addition, we were eating in a restaurant rather than at home. These toy unboxing videos are hypnotic and mesmerised her, allowing us to enjoy a meal or soothe her as she got used to drinking from a plastic glass rather than a bottle. Fast forward a couple of years and her sister is similarly mesmerised if we ever put these videos on; they're like magic.

    Chloe is now three and loves the Little Baby Bum channel. She loves nursery rhymes and asks for her "songs" to be put on the TV, which we do via the YouTube app on our sticks for the TVs. A lot of them are quite educational and she loves singing along, counting the trains as they go by, etc.

    If something inappropriate is on the internet then that is hardly news now, is it? So long as they try their best, it's not the tech companies' fault. We don't leave our children alone and unsupervised on the internet. If we put something on, then we control the remote and we are responsible. It's called being a parent.
    Quote Originally Posted by Ominous Gamer View Post
    ℬeing upset is understandable, but be upset at yourself for poor planning, not at the world by acting like a spoiled bitch during an interview.

  4. #4
    Quote Originally Posted by RandBlade View Post
    TL;DR - skim read only some points.

    My children love YouTube from time to time. We first discovered the "toy unboxing" phenomenon a couple of years ago while on holiday in Mexico. Our daughter Chloe was then about 18 months and was struggling with losing her bottle cold turkey (she'd chewed through her teats); in addition, we were eating in a restaurant rather than at home. These toy unboxing videos are hypnotic and mesmerised her, allowing us to enjoy a meal or soothe her as she got used to drinking from a plastic glass rather than a bottle. Fast forward a couple of years and her sister is similarly mesmerised if we ever put these videos on; they're like magic.

    Chloe is now three and loves the Little Baby Bum channel. She loves nursery rhymes and asks for her "songs" to be put on the TV, which we do via the YouTube app on our sticks for the TVs. A lot of them are quite educational and she loves singing along, counting the trains as they go by, etc.

    If something inappropriate is on the internet then that is hardly news now, is it? So long as they try their best, it's not the tech companies' fault. We don't leave our children alone and unsupervised on the internet. If we put something on, then we control the remote and we are responsible. It's called being a parent.
    Steely is right, something is wrong on the internet. And you missed the point: big tech is NOT doing "their best" to protect consumers, not even children. They care about ad revenue, and do a crappy job vetting that (see Russian troll bots) as much as screening content.

    If you think "good parenting" is enough to counteract the mess from the "magic" (your words), you missed another point: the internet has become a swamp that no human OR computer algorithm can maneuver, let alone manage.

    Censorship and regulation are like a third rail that no one wants to touch, in the New Wild West.

  5. #5
    Quote Originally Posted by RandBlade View Post
    If something inappropriate is on the internet then that is hardly news now, is it? So long as they try their best, it's not the tech companies' fault. We don't leave our children alone and unsupervised on the internet. If we put something on, then we control the remote and we are responsible. It's called being a parent.
    It's their platform and they're responsible for what gets put on it.
    When the sky above us fell
    We descended into hell
    Into kingdom come

  6. #6
    Quote Originally Posted by Steely Glint View Post
    It's their platform and they're responsible for what gets put on it.
    Not entirely, no, they're not. It's an open platform, and users are fully aware of that. If something gets reported and ignored then that's an issue, but they're no more at fault for things slipping through the cracks than Wraith, Dread and I are responsible for the recent spam wave here.

    Quote Originally Posted by GGT View Post
    Steely is right, something is wrong on the internet. And you missed the point: big tech is NOT doing "their best" to protect consumers, not even children. They care about ad revenue, and do a crappy job vetting that (see Russian troll bots) as much as screening content.

    If you think "good parenting" is enough to counteract the mess from the "magic" (your words), you missed another point: the internet has become a swamp that no human OR computer algorithm can maneuver, let alone manage.

    Censorship and regulation are like a third rail that no one wants to touch, in the New Wild West.
    Your second paragraph contradicts your first. If no human or algorithm can manage it, then how is it their fault that it's not getting managed? Of course they care about ad revenue, and that is a good thing, not a bad thing, but they also care to do their best to screen things, because if people stop using their system due to inadequate screening then that will hurt them. If parents start to boycott these platforms due to screening being that bad then that would be devastating to their bottom line.
    Quote Originally Posted by Ominous Gamer View Post
    ℬeing upset is understandable, but be upset at yourself for poor planning, not at the world by acting like a spoiled bitch during an interview.

  7. #7
    They are responsible for the algorithmically generated content. There's no way around that.
    "One day, we shall die. All the other days, we shall live."

  8. #8
    If it's their algorithm, of course, yes. But they're not the ones generating it, any more than Wraith is generating the spam bots' algorithms.
    Quote Originally Posted by Ominous Gamer View Post
    ℬeing upset is understandable, but be upset at yourself for poor planning, not at the world by acting like a spoiled bitch during an interview.

  9. #9
    It's not their algorithms generating these iffy videos, but it's their search algorithms and business model that incentivise the creation of those videos.
    When the sky above us fell
    We descended into hell
    Into kingdom come

  10. #10
    Anyone who provides opportunities will always encourage the incentivisation of problems. That doesn't mean we should get rid of the opportunities or shut everything down.

    The provision of email has encouraged all sorts of problems, the tech companies still haven't managed to get rid of all spam or viruses (though Google is very efficient at it now with Gmail). Are we to blame the companies for incentivising the creation of problematic emails?
    Quote Originally Posted by Ominous Gamer View Post
    ℬeing upset is understandable, but be upset at yourself for poor planning, not at the world by acting like a spoiled bitch during an interview.

  11. #11
    Quote Originally Posted by RandBlade View Post
    Anyone who provides opportunities will always encourage the incentivisation of problems. That doesn't mean we should get rid of the opportunities or shut everything down.
    Or maybe they could hire some people to moderate their platform, instead of farming it out to an algorithm, and then calling it a day? How about that?

    The provision of email has encouraged all sorts of problems, the tech companies still haven't managed to get rid of all spam or viruses (though Google is very efficient at it now with Gmail). Are we to blame the companies for incentivising the creation of problematic emails?
    Well, the companies didn't design and invent the protocols used in e-mail, the flaws in which have (arguably) contributed to the problem of spam, for example the lack of authentication in SMTP allowing for trivial spoofing of e-mail addresses.
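    To illustrate just how trusting the bare protocol is, here's a minimal sketch; it assumes a permissive test server on localhost (e.g. one started with the aiosmtpd package), and real providers do bolt SPF/DKIM/DMARC checks on top:

    ```python
    # Illustration of SMTP's missing authentication: the protocol accepts
    # whatever sender you claim to be. Assumes a permissive local test server
    # (e.g. `python -m aiosmtpd -n`, which listens on localhost:8025).
    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "ceo@example.com"  # SMTP itself never verifies this claim
    msg["To"] = "victim@example.com"
    msg["Subject"] = "Spoofing demo"
    msg.set_content("The bare protocol took my word for who I am.")

    with smtplib.SMTP("localhost", 8025) as server:
        server.send_message(msg)  # accepted with no authentication at all
    ```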

    Like most of the raging security holes in the structure of the internets, this stems from it all having been made up pretty much on the fly in the 70s and the possible malicious uses simply not occurring to the people making the decisions, who to be fair to them also did not know that the system they were cobbling together would grow up to become a major method of global communication. Hindsight being 20-20, it's hard to really blame them. Plus it was the 70s, so they had enough on their plate as it is.

    But if SMTP were a proprietary system run by one company who were just refusing to update the protocol because they don't feel like paying anyone to do it, then they would indeed bear some of the responsibility if someone gets caught out by, e.g. a malicious attachment that looked like it was from someone on their contact list.

    Unfortunately, the attitude amongst the major tech companies is 'we just build the platform, we're not responsible for what happens on it', which is something we're all paying the price for at the moment.
    When the sky above us fell
    We descended into hell
    Into kingdom come

  12. #12
    Quote Originally Posted by Steely Glint View Post
    Or maybe they could hire some people to moderate their platform, instead of farming it out to an algorithm, and then calling it a day? How about that?
    Good idea. Except they already do have people who do that 24/7: https://support.google.com/youtubeki...30562?hl=en-GB
    Well, the companies didn't design and invent the protocols used in e-mail, the flaws in which have (arguably) contributed to the problem of spam, for example the lack of authentication in SMTP allowing for trivial spoofing of e-mail addresses.

    Like most of the raging security holes in the structure of the internets, this stems from it all having been made up pretty much on the fly in the 70s and the possible malicious uses simply not occurring to the people making the decisions, who to be fair to them also did not know that the system they were cobbling together would grow up to become a major method of global communication. Hindsight being 20-20, it's hard to really blame them. Plus it was the 70s, so they had enough on their plate as it is.
    Problems crop up in every development; unforeseen consequences are inevitable. How they're dealt with is the key variable.
    But if SMTP were a proprietary system run by one company who were just refusing to update the protocol because they don't feel like paying anyone to do it, then they would indeed bear some of the responsibility if someone gets caught out by, e.g. a malicious attachment that looked like it was from someone on their contact list.
    Except the proprietary systems are updated continuously, which is why the non-proprietary systems we're using are four decades out of date, whereas for the major tech companies innovation doesn't stop.
    Unfortunately, the attitude amongst the major tech companies is 'we just build the platform, we're not responsible for what happens on it', which is something we're all paying the price for at the moment.
    Except it's not.
    Quote Originally Posted by Ominous Gamer View Post
    ℬeing upset is understandable, but be upset at yourself for poor planning, not at the world by acting like a spoiled bitch during an interview.

  13. #13
    Stop making excuses for the corporate tech entities that rule the world, and have more power than any government, Rand.

  14. #14
    It's not excuses, it's reality.

    Apart from anarchist anti-capitalist ravings about money, how precisely should the tech companies fix this?

    There was a very reasonable suggestion earlier from Steely Glint that they should have human moderators - except they already do, 24/7.

    Is there anything else they're not already doing that they should be?
    Quote Originally Posted by Ominous Gamer View Post
    ℬeing upset is understandable, but be upset at yourself for poor planning, not at the world by acting like a spoiled bitch during an interview.

  15. #15
    They could start by taking responsibility or accountability for security flaws. Right now all that means is some apology after the fact, and promises to "do better". It seems they're taking advantage of our outdated and ineffective (or non-existent) regulatory measures, and are hoping they can continue to regulate themselves. As we all know, that's a ludicrous proposal.

    So maybe they need to be regulated like any other utility? That might give consumers some recourse in the legal system to sue for damages. Sometimes only class-action lawsuits (and million-dollar penalties) can force necessary changes.

  16. #16
    How about parents take responsibility for supervising their kids when they're on the internet and not just farm off their responsibilities to corporations?

    It's not ludicrous to self-regulate; it is in their corporate self-interest. If parents deem their company to be unsafe for kids, then that harms their bottom line, and that is far more of an incentive to be safe than any bureaucrat has ever come up with.

    I notice a stunning silence with regard to "what else should be done?" since "hire some people to moderate their platform" was pointed out to be something they have done already, with people working to moderate "24/7".
    Quote Originally Posted by Ominous Gamer View Post
    ℬeing upset is understandable, but be upset at yourself for poor planning, not at the world by acting like a spoiled bitch during an interview.

  17. #17
    Quote Originally Posted by RandBlade View Post
    How about parents take responsibility for supervising their kids when they're on the internet and not just farm off their responsibilities to corporations?
    Rand, "the internet" isn't something parents can actually monitor 24/7. Adults can't even do that for themselves (see Equifax, Wells Fargo, Yahoo, and most recently Uber))!

    It's not ludicrous to self-regulate; it is in their corporate self-interest. If parents deem their company to be unsafe for kids, then that harms their bottom line, and that is far more of an incentive to be safe than any bureaucrat has ever come up with.
    Corporate self-interest doesn't necessarily mean modern consumer protections. That's like saying baby crib manufacturers would have reduced the space between railings after the first injury or death, and made it an industry safety standard, because it was the right thing to do. Well, they didn't. In fact, it took a long time to force those safety changes via consumer protection agency regulations.

    Name one industry that's been effective in regulating itself....

  18. #18
    Quote Originally Posted by RandBlade View Post
    It's not ludicrous to self-regulate; it is in their corporate self-interest.
    It's in their self-interest to do whatever increases views and thus revenue. Parents simply "deeming" something unsafe doesn't cut it. It took weeks of public outcry, and the issue spreading to far-flung places like here, before YouTube took action. Why? Because one of the biggest "problem" channels was literally one of their biggest channels, period, with close to 10 million subscribers. There is a reason this is a 20+ year old fucking joke.

    It's not just YouTube either:
    "In a field where an overlooked bug could cost millions, you want people who will speak their minds, even if they’re sometimes obnoxious about it."

  19. #19
    Quote Originally Posted by GGT View Post
    Rand, "the internet" isn't something parents can actually monitor 24/7. Adults can't even do that for themselves (see Equifax, Wells Fargo, Yahoo, and most recently Uber))!
    Did I say that parents need to monitor the Internet 24/7? No, I said they should supervise their children when they are on the Internet. Children don't need 24/7 access to the Internet.
    Quote Originally Posted by Ominous Gamer View Post
    ℬeing upset is understandable, but be upset at yourself for poor planning, not at the world by acting like a spoiled bitch during an interview.

  20. #20
    Quote Originally Posted by RandBlade View Post
    Did I say that parents need to monitor the Internet 24/7? No, I said they should supervise their children when they are on the Internet. Children don't need 24/7 access to the Internet.
    Uh, they do need 24/7 access to the internet by the time they hit middle school, to do their required school homework on-line. Which is often way past their parents' bedtime. I forgot your kids are still quite young. Sorry to say, but you've got a rude awakening coming.

  21. #21
    I'd like to tweak Steely's OP: Something is wrong with the internet. At least, there's something wrong with how we view and treat the internet. We give "it" passes that no other industry is given. We tend to treat it as some special sector, that can't be expected to anticipate every problem, because it's "new technology".

    I suspect Edison said the same thing about the light bulb when electricity was new....but at some point electricity became a utility, and electricity providers (and light bulb manufacturers) were held to a higher standard.

    Do I need to mention the relationship between Edison and Ford, or the correlation between electricity, the automobile battery, the rubber tire, gas and petroleum products.....that evolved over time.....and was eventually regulated by governmental/legislative forces?

  22. #22
    Quote Originally Posted by GGT View Post
    Uh, they do need 24/7 access to the internet by the time they hit middle school, to do their required school homework on-line. Which is often way past their parents' bedtime. I forgot your kids are still quite young. Sorry to say, but you've got a rude awakening coming.
    No, middle schoolers may need Internet access, but not 24/7. They also won't be watching Peppa Pig etc., which is what the article discusses. Peppa Pig is an animation aimed at preschoolers. YouTube Kids is also aimed at "young children".

    Similarly, for the other things mentioned in the article, the target market is preschoolers and young children. Like Little Baby Bum, mentioned as having 13 billion views: the clue is in the name.

    My children's demographic is the target market of what we are discussing and they don't need 24/7 unsupervised Internet access.
    Quote Originally Posted by Ominous Gamer View Post
    ℬeing upset is understandable, but be upset at yourself for poor planning, not at the world by acting like a spoiled bitch during an interview.

  23. #23
    Quote Originally Posted by GGT View Post
    Uh, they do need 24/7 access to the internet by the time they hit middle school, to do their required school homework on-line. Which is often way past their parents' bedtime. I forgot your kids are still quite young. Sorry to say, but you've got a rude awakening coming.
    Your middle school kids stayed up later than you?
    Life is too short to dance with ugly men

  24. #24
    I have always stayed up longer than everyone else.
    "One day, we shall die. All the other days, we shall live."

  25. #25
    And had unfettered, unsupervised access to everything?
    Quote Originally Posted by Ominous Gamer View Post
    ℬeing upset is understandable, but be upset at yourself for poor planning, not at the world by acting like a spoiled bitch during an interview.

  26. #26
    Quote Originally Posted by RandBlade View Post
    And had unfettered, unsupervised access to everything?
    We had no internet access in Bangladesh when I was two years old. I have always been up reading later than everyone else. Middle-school kids these days indeed tend to be up later than parents, at least judging from what I've heard from those of my colleagues who have kids that age. Whether or not you can blame that on schoolwork is uncertain but given how many extra-curricular activities they have I wouldn't be surprised.
    "One day, we shall die. All the other days, we shall live."

  27. #27
    Hence I didn't say "to the internet", I said "to everything".

    When I was in middle school my parents restricted my access to all sorts of things that we did have. I wasn't able to watch TV 24/7; we had a computer (no internet) but I was only allowed to use it for a limited time per day. The idea of unfettered 24/7 access to everything was off the cards.
    Quote Originally Posted by Ominous Gamer View Post
    ℬeing upset is understandable, but be upset at yourself for poor planning, not at the world by acting like a spoiled bitch during an interview.

  28. #28
    Quote Originally Posted by Ominous Gamer View Post
    It's in their self-interest to do whatever increases views and thus revenue. Parents simply "deeming" something unsafe doesn't cut it. It took weeks of public outcry, and the issue spreading to far-flung places like here, before YouTube took action. Why? Because one of the biggest "problem" channels was literally one of their biggest channels, period, with close to 10 million subscribers. There is a reason this is a 20+ year old fucking joke.

    It's not just YouTube either:
    I really don't see why people expect anyone other than parents to police their children. Don't like little Johnny seeing something he shouldn't? Monitor his usage or provide some other form of entertainment. It isn't like any 10-year-old with a phone and an open browser can't find videos online of people literally dying (thanks, ISIS). Though for some reason in America we are less concerned with that kind of video and far more concerned with a nipple, but whatever, the point remains the same.

  29. #29
    YouTube and Google, punched in their bottom line, promise more urgent action: http://www.bbc.co.uk/news/technology-42110068

    A regulatory investigation would barely have got off the ground by now, if that.
    Quote Originally Posted by Ominous Gamer View Post
    ℬeing upset is understandable, but be upset at yourself for poor planning, not at the world by acting like a spoiled bitch during an interview.

  30. #30
    "One day, we shall die. All the other days, we shall live."
