I'm Incredibly Envious Of His Masculinity / Inspired By Horikoshi's Drawing



im incredibly envious of his masculinity / inspired by horikoshi’s drawing
do not repost, reblog only
twitter | ig | commissions | prints | ko-fi
original sketch:
More Posts from Leftcheesecakephilosopher
I am genuinely terrified of the way working full time absolutely kills your creativity/brain capacity/zest for life/etc
frankly I think a lot more people would be open to postmodern art if we all stopped pretending you had to be very smart to understand it and started acknowledging that the starting point for deriving meaning from it is frequently 'this is stupid bullshit'
The moon
fucks me up that by total coincidence the sun and moon's size difference is almost exactly matched by their difference in distance from us, which is what makes our beautiful total solar eclipses possible: the moon covers the sun almost perfectly, leaving just the silver threads of the sun's corona visible
The stars (literally) aligned just right for this experience to be possible. It's likely that aliens don't have this
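For anyone who wants to check the coincidence, here's a rough back-of-the-envelope sketch in Python (not part of the original post) using approximate average figures; the real distances vary over each orbit, which is why some eclipses end up annular instead of total:

```python
import math

# Approximate average figures (assumptions for illustration, not precise values)
SUN_DIAMETER_KM = 1_392_700
SUN_DISTANCE_KM = 149_600_000
MOON_DIAMETER_KM = 3_474
MOON_DISTANCE_KM = 384_400

def angular_diameter_deg(diameter_km, distance_km):
    """Apparent size of a body in degrees, as seen from Earth."""
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

print(f"Sun:  {angular_diameter_deg(SUN_DIAMETER_KM, SUN_DISTANCE_KM):.2f} degrees")
print(f"Moon: {angular_diameter_deg(MOON_DIAMETER_KM, MOON_DISTANCE_KM):.2f} degrees")

# Both come out around half a degree: the sun is roughly 400x wider than the moon
# but also roughly 390x farther away, so the two discs look nearly the same size.
```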
What kind of bubble is AI?

My latest column for Locus Magazine is "What Kind of Bubble is AI?" All economic bubbles are hugely destructive, but some of them leave behind wreckage that can be salvaged for useful purposes, while others leave nothing behind but ashes:
https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/
Think about some 21st century bubbles. The dotcom bubble was a terrible tragedy, one that drained the coffers of pension funds and other institutional investors and wiped out retail investors who were gulled by Superbowl Ads. But there was a lot left behind after the dotcoms were wiped out: cheap servers, office furniture and space, but far more importantly, a generation of young people who'd been trained as web makers, leaving nontechnical degree programs to learn HTML, perl and python. This created a whole cohort of technologists from non-technical backgrounds, a first in technological history. Many of these people became the vanguard of a more inclusive and humane tech development movement, and they were able to make interesting and useful services and products in an environment where raw materials – compute, bandwidth, space and talent – were available at firesale prices.
Contrast this with the crypto bubble. It, too, destroyed the fortunes of institutional and individual investors through fraud and Superbowl Ads. It, too, lured in nontechnical people to learn esoteric disciplines at investor expense. But apart from a smattering of Rust programmers, the main residue of crypto is bad digital art and worse Austrian economics.
Or think of Worldcom vs Enron. Both bubbles were built on pure fraud, but Enron's fraud left nothing behind but a string of suspicious deaths. By contrast, Worldcom's fraud was a Big Store con that required laying a ton of fiber that is still in the ground to this day, and is being bought and used at pennies on the dollar.
AI is definitely a bubble. As I write in the column, if you fly into SFO and rent a car and drive north to San Francisco or south to Silicon Valley, every single billboard is advertising an "AI" startup, many of which are not even using anything that can be remotely characterized as AI. That's amazing, considering what a meaningless buzzword AI already is.
So which kind of bubble is AI? When it pops, will something useful be left behind, or will it go away altogether? To be sure, there's a legion of technologists who are learning Tensorflow and Pytorch. These nominally open source tools are bound, respectively, to Google and Facebook's AI environments:
https://pluralistic.net/2023/08/18/openwashing/#you-keep-using-that-word-i-do-not-think-it-means-what-you-think-it-means
But if those environments go away, those programming skills become a lot less useful. Live, large-scale Big Tech AI projects are shockingly expensive to run. Some of their costs are fixed – collecting, labeling and processing training data – but the running costs for each query are prodigious. There's a massive primary energy bill for the servers, a nearly as large energy bill for the chillers, and a titanic wage bill for the specialized technical staff involved.
Once investor subsidies dry up, will the real-world, non-hyperbolic applications for AI be enough to cover these running costs? AI applications can be plotted on a 2X2 grid whose axes are "value" (how much customers will pay for them) and "risk tolerance" (how perfect the product needs to be).
Charging teenaged D&D players $10/month for an image generator that creates epic illustrations of their characters fighting monsters is low value and very risk tolerant (teenagers aren't overly worried about six-fingered swordspeople with three pupils in each eye). Charging scammy spamfarms $500/month for a text generator that spits out dull, search-algorithm-pleasing narratives to appear over recipes is likewise low-value and highly risk tolerant (your customer doesn't care if the text is nonsense). Charging visually impaired people $100/month for an app that plays a text-to-speech description of anything they point their cameras at is low-value and moderately risk tolerant ("that's your blue shirt" when it's green is not a big deal, while "the street is safe to cross" when it's not is a much bigger one).
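To make that grid concrete, here's a minimal sketch (mine, not the column's) that places the three examples above on the value / risk-tolerance axes; the names and price points are just the illustrative ones from the text:

```python
# Toy rendering of the 2x2 grid described above: one axis is customer value,
# the other is how tolerant the application is of the model being wrong.
examples = [
    # (application, value, risk tolerance)
    ("D&D character art, $10/month", "low", "very tolerant"),
    ("SEO spam copy for recipe sites, $500/month", "low", "very tolerant"),
    ("Camera-to-speech scene descriptions, $100/month", "low", "moderately tolerant"),
]

for app, value, tolerance in examples:
    print(f"{app:<52} value: {value:<5} risk: {tolerance}")
```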
Morgan Stanley doesn't talk about the trillions the AI industry will be worth some day because of these applications. These are just spinoffs from the main event, a collection of extremely high-value applications. Think of self-driving cars or radiology bots that analyze chest x-rays and characterize masses as cancerous or noncancerous.
These are high value – but only if they are also risk-tolerant. The pitch for self-driving cars is "fire most drivers and replace them with 'humans in the loop' who intervene at critical junctures." That's the risk-tolerant version of self-driving cars, and it's a failure. More than $100b has been incinerated chasing self-driving cars, and cars are nowhere near driving themselves:
https://pluralistic.net/2022/10/09/herbies-revenge/#100-billion-here-100-billion-there-pretty-soon-youre-talking-real-money
Quite the reverse, in fact. Cruise was just forced to quit the field after one of their cars maimed a woman – a pedestrian who had not opted into being part of a high-risk AI experiment – and dragged her body 20 feet through the streets of San Francisco. Afterwards, it emerged that Cruise had replaced the single low-waged driver who would normally be paid to operate a taxi with 1.5 high-waged skilled technicians who remotely oversaw each of its vehicles:
https://www.nytimes.com/2023/11/03/technology/cruise-general-motors-self-driving-cars.html
The self-driving pitch isn't that your car will correct your own human errors (like an alarm that sounds when you activate your turn signal while someone is in your blind-spot). Self-driving isn't about using automation to augment human skill – it's about replacing humans. There's no business case for spending hundreds of billions on better safety systems for cars (there's a human case for it, though!). The only way the price-tag justifies itself is if paid drivers can be fired and replaced with software that costs less than their wages.
What about radiologists? Radiologists certainly make mistakes from time to time, and if there's a computer vision system that makes different mistakes than the sort that humans make, it could be a cheap way of generating second opinions that trigger re-examination by a human radiologist. But no AI investor thinks their return will come from selling hospitals a system that reduces the number of X-rays each radiologist processes every day, as a second-opinion-generating tool would. Rather, the value of AI radiologists comes from firing most of your human radiologists and replacing them with software whose judgments are cursorily double-checked by a human whose "automation blindness" will turn them into an OK-button-mashing automaton:
https://pluralistic.net/2023/08/23/automation-blindness/#humans-in-the-loop
The profit-generating pitch for high-value AI applications lies in creating "reverse centaurs": humans who serve as appendages for automation that operates at a speed and scale that is unrelated to the capacity or needs of the worker:
https://pluralistic.net/2022/04/17/revenge-of-the-chickenized-reverse-centaurs/
But unless these high-value applications are intrinsically risk-tolerant, they are poor candidates for automation. Cruise was able to nonconsensually enlist the population of San Francisco in an experimental murderbot development program thanks to the vast sums of money sloshing around the industry. Some of this money funds the inevitabilist narrative that self-driving cars are coming, it's only a matter of when, not if, and so SF had better get in the autonomous vehicle or get run over by the forces of history.
Once the bubble pops (all bubbles pop), AI applications will have to rise or fall on their actual merits, not their promise. The odds are stacked against the long-term survival of high-value, risk-intolerant AI applications.
The problem for AI is that while there are a lot of risk-tolerant applications, they're almost all low-value; while nearly all the high-value applications are risk-intolerant. Once AI has to be profitable – once investors withdraw their subsidies from money-losing ventures – the risk-tolerant applications need to be sufficient to run those tremendously expensive servers in those brutally expensive data-centers tended by exceptionally expensive technical workers.
If they aren't, then the business case for running those servers goes away, and so do the servers – and so do all those risk-tolerant, low-value applications. It doesn't matter if helping blind people make sense of their surroundings is socially beneficial. It doesn't matter if teenaged gamers love their epic character art. It doesn't even matter how horny scammers are for generating AI nonsense SEO websites:
https://twitter.com/jakezward/status/1728032634037567509
These applications are all riding on the coattails of the big AI models that are being built and operated at a loss today in the hope of becoming profitable later. If they remain unprofitable long enough, the private sector will no longer pay to operate them.
Now, there are smaller models, models that stand alone and run on commodity hardware. These would persist even after the AI bubble bursts, because most of their costs are setup costs that have already been borne by the well-funded companies who created them. These models are limited, of course, though the communities that have formed around them have pushed those limits in surprising ways, far beyond their original manufacturers' beliefs about their capacity. These communities will continue to push those limits for as long as they find the models useful.
These standalone, "toy" models are derived from the big models, though. When the AI bubble bursts and the private sector no longer subsidizes mass-scale model creation, it will cease to spin out more sophisticated models that run on commodity hardware (it's possible that federated learning and other techniques for spreading out the work of making large-scale models will fill the gap).
So what kind of bubble is the AI bubble? What will we salvage from its wreckage? Perhaps the communities who've invested in becoming experts in Pytorch and Tensorflow will wrestle them away from their corporate masters and make them generally useful. Certainly, a lot of people will have gained skills in applying statistical techniques.
But there will also be a lot of unsalvageable wreckage. As big AI models get integrated into the processes of the productive economy, AI becomes a source of systemic risk. The only thing worse than having an automated process that is rendered dangerous or erratic by its AI integration is having that process fail entirely because the AI suddenly disappeared, a collapse too precipitous for former AI customers to engineer a soft landing for their systems.
This is a blind spot in our policymakers' debates about AI. The smart policymakers are asking questions about fairness, algorithmic bias, and fraud. The foolish policymakers are ensnared in fantasies about "AI safety," AKA "Will the chatbot become a superintelligence that turns the whole human race into paperclips?"
https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space
But no one is asking, "What will we do if" – when – "the AI bubble pops and most of this stuff disappears overnight?"

If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/12/19/bubblenomics/#pop

Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
--
tom_bullock (modified) https://www.flickr.com/photos/tombullock/25173469495/
CC BY 2.0 https://creativecommons.org/licenses/by/2.0/
Fanfic Thieves on Youtube
A collection of youtube channels has been uploading preexisting fanfictions in videos with little to no credit to the original authors. These are not podfics: these channels copy-paste the fics into text-to-speech readers, then upload the unaltered audio over static or unrelated backgrounds, either art that is also stolen or mobile game footage. In addition to not naming the authors, they alter the title to make it that much harder for readers to recognize or find the original uploads. Some go so far as to pretend they themselves are creating the fics in question. Many claim that their stealing actually helps give fics "exposure" despite the intentional steps they take to conceal the origins of the fics they profit off of. However, this practice has led many authors to discontinue fics after the frustration of having their hard work stolen. Many of these channels claim they will remove videos upon request, but will either argue with the author in order to keep a video up, or simply unlist it for a time until they think the author isn't paying attention anymore. And their solution to receiving strikes against their channels in the past has been to further obfuscate the origins of their content instead of even considering asking first.



”I got caught stealing, so instead of not stealing anymore, I’m doubling down on stealing even more so it’s harder for people to find out and prove I’m stealing. Stealing doesn't count if the specific person I stole from didn't call me out. I am the real victim.”
That, plus the incessant tag scumming in all the videos (spamming unrelated tags in order to appear in more search results) proves to me that these are lazy attention seekers who don't want to put in creative effort when they could just leech off of the passion of others.
In order to report them, go to their channel's "About" page and click the flag icon. Said icon might be behind the three dots in the top bar on mobile. Go to "Report User" at the bottom and tick the "spam and scams" button. This will allow you to list multiple videos as offenders instead of reporting them individually. Youtube's policy states that video spam constitutes:
Massively uploading content that you scraped from other creators.
Auto-generated content that computers post without regard for quality or viewer experience.
If you recognize one of your fics among the stolen, say so in the additional comments box, and perhaps call out the channel directly in the video's comments. If you recognize someone else's fic, please let the original author know so they can report the channel as well. Many have been confronted for stealing previously and refuse to admit wrongdoing.
Most of what I've found has been My Hero Academia fics since that's my fandom and those are the ones I can recognize as stolen, but there are many other channels that steal from other fandoms, so I invite anyone and everyone to reblog this with their own findings.
The reality is that this extremely low-effort content and new youtube channels are both very easy to make, so they'll most likely start new channels once the ones on this list are run through. But hopefully, if we all work together and keep whacking these moles, we can instill in them the same defeatism they caused in so many creators who didn't deserve it, and eventually they'll give up.
My sincerest thanks to everyone who helped bring additional channels to my attention. A special thanks to ao3 user InArduisFidelis who brought the initial attention to the issue, and @owlf45 whose work was stolen.
Links under the cut.
YurikoFanfics - Not only stole content, but acted in comments as though they were the one writing these stories.
https://href.li/?https://www.youtube.com/@YurikoFanfics
What-IF-Anime - Has the exact same "disclaimer" about not being the original author as the one above. Either they're the same person or the thieves are stealing from each other.
https://href.li/?https://www.youtube.com/@What-IF-Anime
quirkywhatif7 - Either an alt of the above, or all these people are talking to one another because this one made a community post identical to a comment the one above made in response to being called out (the above screenshots).
https://www.youtube.com/@quirkywhatif7/about
DekuFanfic - It's the same fucking guy again.
https://www.youtube.com/@DekuFanfic/about
InfiniteParadoxfanfics - Nothing notable, same deal as the others.
https://href.li/?https://www.youtube.com/@InfiniteParadoxfanfics/about
WhatIfAnimeChannel - Admits in their community posts that other people write the fics they post but still doesn't give credit. Migrated to a new channel after issues with youtube, likely being flagged previously.
https://href.li/?https://www.youtube.com/@WhatIfAnimeChannel/about
WhatIfAnimeAll - Alt of above.
https://href.li/?https://www.youtube.com/@WhatIfAnimeAll
FWNWorld - Makes sure to tell you that the videogame footage is theirs, but can't be bothered to credit anyone else.
https://href.li/?https://www.youtube.com/@FWNWorld/about
WTFW - Claims to have "[A] team of talented writers, voice actors, and artists work together to create immersive fan fiction stories that are sure to captivate your imagination." Just the same text-to-speech stolen content over videogames. So they're straight up lying when they claim that everything is theirs (and that anything they make is quality).
https://href.li/?https://www.youtube.com/@WTFW
MHA2.0Fanfics - Lots of crossover theft.
https://www.youtube.com/@MHA2.0Fanfics/about
Collerwhatiif - Pretty sure this one is the same guy as the previous 2, also has one for another fandom.
https://www.youtube.com/@Collerwhatiif/about
https://www.youtube.com/@GoJoFanfiction/videos
ko_sensei - Another that claims to have a "team" that makes the stories they steal: " passionate about creating compelling and engaging fanfiction that explores the various "what ifs" in the anime universe."
https://www.youtube.com/@ko_sensei/about
FantasticWhatIf - Multifandom stealing, uses the exact same bs disclaimer as many others.
https://www.youtube.com/@FantasticWhatIf/about
LettuceHeadFanfics - No credit, no acknowledgement of anything. Next one is an alt.
https://www.youtube.com/@LettuceHeadFanfics/about
brocollifanfics - Alt of above, once again admits to stealing with a declaration of "☆If you want to takedown any videos. You can mail us or leave a comment below the video☆"
https://www.youtube.com/@brocollifanfics/about
whatifofficial786 - Focuses on MHA/Naruto crossovers. Identical format.
https://www.youtube.com/@whatifofficial786/about
NotWhatIf - I've lost track of who's an alt of who but yet another identical format, descriptions, and bullshit claims of "enhancing the viewer experience" by putting a robot voice over bootleg fortnite footage.
https://www.youtube.com/@NotWhatIf/about
weebxds - Same again.
https://www.youtube.com/@weebxds/about
ItachiFanfics - Naruto channel, we can at least confirm that this one is run by a human given the rare different descriptions and a real voice at the beginning of videos before the robot comes back.
https://www.youtube.com/@ItachiFanfics/about
WhatIfDN - As if mockingly, a bunch of videos have a "credit" section in their descriptions that is of course blank.
https://www.youtube.com/@WhatIfDN/about
SpiceandBooks and spiceandfiction - Apparently Youtube itself has started picking up on the bullshit, because this multifandom channel is being dinged as ai spam so they started a new one.
https://www.youtube.com/@SpiceandBooks/about
https://www.youtube.com/@spiceandfiction/about
theoriginalastra - Doesn't even bother with disclaimers, the following are multiple alts/potential alts for different fandoms.
https://www.youtube.com/@theoriginalastra/about
SillySenpai12 - Highschool DXD alt.
https://www.youtube.com/@SillySenpai12/about
RosieRealms - Naruto alt.
https://www.youtube.com/@RosieRealms/about
DekuWhatIfs - Potentially another astra alt but not sure, doesn't matter because all these channels do the same thing anyway.
AnimeStark688 - No credits or disclaimers.
https://www.youtube.com/@AnimeStark688/about
Please take the time to report these channels, spread this post around, and reblog with any additional offending channels you find.