{"version": "1.0", "type": "rich", "title": "I appreciate the desire to do good better. With respect to EA priorities though, \"AI Alignment\" is pretty ludicrous to me. I...", "author_name": "kontextmaschine", "author_url": "https://kontextmaschine.com", "provider_name": "kontextmaschine", "provider_url": "https://kontextmaschine.com", "url": "https://kontextmaschine.com/post/701004939951013888/", "html": "<p><a class=\"tumblr_blog\" href=\"https://the-real-numbers-deactivated202.tumblr.com/post/700993646056570880\" target=\"_blank\">the-real-numbers-deactivated202</a>:</p><blockquote><p><a class=\"tumblr_blog\" href=\"https://argumate.tumblr.com/post/700985256509112320/nodding-wisely-oh-nobody-will-like-this\" target=\"_blank\">argumate</a>:</p><blockquote><p><a class=\"tumblr_blog\" href=\"https://www.tumblr.com/blog/view/fruityyamenrunner/700985160115617792\" target=\"_blank\">fruityyamenrunner</a>:</p><blockquote><p><a class=\"tumblr_blog\" href=\"https://the-real-numbers-deactivated202.tumblr.com/post/700979465115893760\" target=\"_blank\">the-real-numbers-deactivated202</a>:</p><blockquote><p>I appreciate the desire to do good better. With respect to EA priorities though, &ldquo;AI Alignment&rdquo; is pretty ludicrous to me. I really don&rsquo;t understand why unrealistically superintelligent AI developing the uncontrollable capability for harm is a realistic threat, and nobody can give me an answer that isn&rsquo;t vague science fiction or a sanctimonious scolding. The subject of alignment could have changed, but last I checked it seemed pretty concerned with GAI and other far-out scenarios. I personally believe a lot of the GAI stuff is science fiction anxiety driving Pascal&rsquo;s wager.</p><p>And there seem to be many real examples of far stupider machine learning algorithms being carelessly placed in a position to do harm. If there&rsquo;s any ML area of concern, it&rsquo;s AI Fairness, something which I think EAs in general tend to enjoy dunking on because it&rsquo;s not as grandiose as their pet projects or &ldquo;it&rsquo;s full of wokescolds&rdquo; or whatever. It&rsquo;s a terrible look. It makes me wonder if they&rsquo;re actually equipped to fairly judge long-term threats.</p></blockquote><p>the GAI they are afraid of is a psychological extrapolation of their own striving - they are relentlessly self-improving bourgeoises who are being chased by a hyena with multiple very parental looking heads asking them why they aren&rsquo;t making even more [loud superimposed string of phonemes].</p><p>someone i think here posted about how becoming initiated into amphetamine usage was an important transformation that let them identify an eigenvector in their hellvectorspace that remained pointed to [loud phoneme of desire], but there are other similar transformations too like &ldquo;getting a better programming job&rdquo;, &ldquo;networking with the bay area mafia&rdquo;, &ldquo;writing some software that automates and optimises a process&rdquo;. </p><p>attempting to put a combination of all of these transformations together into a single entity, which will be the perfect entity that will make professor mother therapist rabbi general hyena happy at its ability to obtain [loud phoneme catastrophe] ends up looking like &ldquo;a general artificial intelligence&rdquo; whatever that means.</p></blockquote><p>*nodding wisely* oh nobody will like this</p></blockquote><p>the prose poets have logged the fuck on</p></blockquote>\n<p>Oh<b> I </b>mostly understood it functionally, as a legitimating myth for spinning up a tech-autist equivalent of the &ldquo;nonprofit industrial complex&rdquo; to address elite overproduction and the unevenly distributed wealth of a startup economy </p>"}