/qa/'s thoughts on AI Anonymous 11/28/22 (Mon) 07:14:10 No. 100268
There's two threads on /qa/ for different AI generation content and help. A thread on the morality and ethics of AI. One about the future potential AI holds. One on
>>>/megu/ for more in-depth help with specifics. Then, scattered across all the boards, there are some threads using AI generation for image openers and such. However, none of these actually encompass kissu's opinion on AI!
So, what do you /qa/eers think of AI currently? Do you like it, dislike it, find it useful in any meaningful way at all? Or are you simply not interested in the output of algorithms?
I myself find AI to be a useful tool in generating the kind of content I've either exhausted the available stock of or that is gated off by some hurdle I need to spend more time overcoming. When it comes to text/story generation, it's like a CYOA playground where I play the role of DM and set up all the lore/situations/characters, and then the AI acts out events as I lay out the flow of the story. This creates a more interactive experience for me than just writing out a story myself for some reason, and I find it highly addictive. Then for AI art generation, I find that AI works wonders for filling a specific niche taste I have, or a specific scenario I want to create an image for. It really is quite amazing in my eyes, and I have hopes for it getting even better in the future.
185 posts and 32 image replies omitted.
Anonymous 08/31/23 (Thu) 02:08:56 No. 113386
I've had AI Family Guy on the second monitor almost constantly for the past few days because it's so funny. I thought it would take a while before AI could be funny reliably, but whatever they did with this was successful. Unfortunately, it seems like I'd have to join a discord to get any information, so I don't have any knowledge of how it's working. Once in a while I notice one of the newer GPT's "actually let's not do that, it's offensive" responses, but for the most part it's successfully entertaining as it bypasses its lame safety filters with explicit text and voice. There was an "AI Seinfeld" a few months ago, but it was entirely random and had pretty much no entertainment value. This one, though, people feed it a prompt (you need to be in their discord...) and the characters will react to it and say funny things. The voices are synthesized very well; they'll stutter and lock up for 5-10 seconds now and then, but it's honestly pretty hilarious when it happens. Chris's voice is noticeably lower quality and louder, which is strange, but the others are such high quality that it's like it's a real voice. I can't really post most of the stuff on kissu because it's so offensive. It reminds me of 00s internet. Some of the prompts are like "Carter ranks his favorite racial slurs" so, you know... Really, it's the amazing voice synthesis that does the heavy lifting. The way it actually infers the enunciation for so many sentences and situations is amazing. I assume it's using that one 11 labs TTS service, which is paid. My only complaint is that they have them swear WAY too much. It's funny at first, but ehhh...
Anonymous 09/06/23 (Wed) 18:54:22 No. 113518 >>113534
How come people talk about a runaway explosion in AI intelligence, the singularity, but they never say the same about people? Surely if AI can improve itself, our brains are smart enough to improve themselves too?
Anonymous 09/07/23 (Thu) 02:31:35 No. 113534
>>113518 somehow i expect the opposite to happen
Anonymous 09/21/23 (Thu) 06:32:49 No. 113919 >>113920
One of the unexpected things is seeing Facebook, er "Meta" taking the open source approach with its text models. There's no question that OpenAI (ChatGPT) has a huge lead, but after seeing all the improvements being made to Llama (Meta's AI model) from hobbyists it's easy to see that it's the right decision. We as individuals benefit from it, but it's clear that the company is enjoying all the free labor. Surely they saw how powerful Stable Diffusion is due to all the hobbyists performing great feats that were never expected.
I don't trust the company at all, but it can be a mutually beneficial relationship. Meta gets to have AI models that it can use to attempt to stay a company rivaling governments in terms of power, and hobbyists get to have local RP bots free from censorship.
Meta has bought a crapload of expensive nvidia enterprise-level GPUs and it will start training what it expects to compete with GPT4 early next year, and unlike GPT4 it won't take very long due to all the improvements made since then.
https://observer.com/2023/09/chan-zuckerberg-initiative-ai-eradicate-diseases/
Anonymous 09/21/23 (Thu) 06:56:59 No. 113920 >>113928 >>113930
>>113919 Zuck is interesting. Oddly, he's probably the one tech CEO I find somewhat endearing. I'm kind of glad he's retained majority control of Facebook/Meta. I can't see the bean counters at a company like Microsoft or Apple seriously putting any effort into bleeding edge stuff like VR or text models the same way that Facebook has. I could very easily imagine a Facebook/Meta without Zuck turning into a boring, faceless conglomerate with no imagination like Google.
Anonymous 09/21/23 (Thu) 09:44:33 No. 113928 >>115565
>>113920 so freaking weird to see zuck not being public enemy number one any more
maybe it was the one two punch of elon rocketing up to the spot while zuck challenged him to a wrestle
Anonymous 09/21/23 (Thu) 12:35:00 No. 113930 >>113931 >>115567
>>113920 If Zuck worked in government and beefed up state surveillance/censorship to the level of facebook and instagram you would call him a rights abusing tyrant
Anonymous 09/21/23 (Thu) 12:40:12 No. 113931
>>113930 would that be wrong
Anonymous 10/28/23 (Sat) 08:42:48 No. 115565
>>113928 Zuck and Bezos are people who only really care about the bottom line, but you can find their devotion to money at least relatable. Meanwhile Musk or the former president or Henry Ford are people who want to craft society around them.
Pick your battles or so they say
Anonymous 10/28/23 (Sat) 13:57:25 No. 115567 >>115568
>>113930 That's not really a fair comparison.
The government sets the absolute bare minimum level of censorship that every discussion platform must abide by, with the owners of those platforms then adding additional rules to fit its purpose. There's nothing inherently tyrannical about an individual platform having strict censorship, since it is merely the set of rules that users agree to follow, and if they dislike those rules then they are free to either not use the site or only use it to discuss some topics and then use other platforms for other topics. State censorship, on the other hand, cannot be opted out of and encompasses all discussions, and so much more readily infringes on one's rights.
Nor does how one applies censorship to a website have any bearing on how they'd act in government - if the owner of a small hobby forum bans discussion of politics due to it being off-topic and causing drama, that obviously doesn't mean they object to all political discussion nationwide.
And while surveillance is more insidious, as it is hard to know even to what extent you're being watched, let alone be able to clearly opt out, there is still a huge difference between surveillance with the goal of feeding people targeted ads and engaging content, and surveillance with the goal of sending people to jail. Both can infringe on one's rights, but only the latter is tyrannical, since corporate surveillance is merely for the sake of maximizing profit rather than for political control.
Anonymous 10/28/23 (Sat) 14:28:34 No. 115568
>>115567
>it is hard to know even to what extent you're being watched
They tell you.
Anonymous 10/28/23 (Sat) 14:51:50 No. 115569 >>115570
you're being watched
Anonymous 11/17/23 (Fri) 04:52:13 No. 116231 >>116234
Not sure if this is the right thread to talk about it or not, but those kuon animations in the sage thread really seem like a step up from that "mario bashing luigi" one that was posted here a while back.
Anonymous 11/17/23 (Fri) 05:11:37 No. 116234
>>116231 It's the right thread.
I think it's advanced quite a bit (and yeah that was also me back then).
I'm still learning about it so I haven't made any writing about it yet. There's a few different models and even talk of LORAs, so it's definitely going places.
I believe the reason this works is because of ControlNet which was a pretty major breakthrough (but I'm too lazy to use it). It's been known that ControlNet has a benefit to this animation stuff, but I didn't feel like looking into it until now. The way it works is that it uses the previous frame as a 'base' for a new one, so stuff stays more consistent, though still not consistent enough to be all that useful (I think). There's something you can follow with your eye, so that means a lot.
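If you want a rough idea of what that 'previous frame as a base' trick looks like outside the webui, here's a minimal sketch using the diffusers library. To be clear, this is not what the animation extension actually does, just an illustration of the idea; the model names, prompt, strength and frame count are placeholders I made up.
```python
# Rough sketch of the frame-by-frame idea described above, using diffusers.
# NOT the actual animation extension -- just an illustration; the model names,
# prompt, strength and frame count are placeholders.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

def canny_hint(img: Image.Image) -> Image.Image:
    # The canny ControlNet wants an edge map of the conditioning image.
    gray = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    return Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

frame = Image.open("frame_000.png").convert("RGB")  # hypothetical starting frame
for i in range(1, 16):
    frame = pipe(
        prompt="1girl, dancing",          # placeholder prompt
        image=frame,                      # previous frame as the img2img 'base'
        control_image=canny_hint(frame),  # its edges keep the layout in place
        strength=0.5,                     # how far each new frame may drift
        num_inference_steps=20,
    ).images[0]
    frame.save(f"frame_{i:03d}.png")
```
The strength number is basically the whole trade-off: lower keeps things consistent, higher actually lets it move.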
Anonymous 11/17/23 (Fri) 23:30:05 No. 116251 >>116252 >>116360
Sam Altman has been booted from OpenAI:
https://www.nytimes.com/2023/11/17/technology/openai-sam-altman-ousted.html
https://www.theguardian.com/technology/2023/nov/17/openai-ceo-sam-altman-fired
I'm not sure what to make of it. He's been the CEO and the face of the company, so it's a major surprise. The business world is cutthroat and full of backstabbing and shareholder greed and all sorts of other chicanery from human garbage so who knows what would cause this to happen. Maybe it's deserved, maybe it's not. I can't see this as anything other than damaging to the company since it lays bare some internal conflict.
Anonymous 11/17/23 (Fri) 23:58:48 No. 116252
>>116251 >"he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities" Hmmm, neither these quotes nor the articles themselves explain much. Hard to comment on, really.
Also here's the link to the NY Times one in case anyone else is paywalled:
https://archive.is/8Ofco
Anonymous 11/19/23 (Sun) 07:12:10 No. 116342
File:v15.mp4 (1022.53 KB,512x576)
>>116287 Anything above SD1.5 including XL has zero value to me for the time being because it's not going to have the NovelAI booru-scraped model leak that enables quality 2D models to easily be made and merged. Early this year a guy tried to make his own (Waifu Diffusion) and it took months of 24/7 GPU training and it wasn't nearly as good. Will someone make their own NAI equivalent for SDXL? Possibly.
In its base form SDXL will fail to compare to Dalle because SD can't compete with the datasets and raw computational power of Microsoft/OpenAI. SD relies on the specialization of extensions and LORAs and the like, but few people are eager to move to SDXL, even if they have the hardware to do so. If I wanted to make a Kuon Lora for SDXL I simply couldn't because I don't have the VRAM, and that's even if it's possible with any Frankenstein'd 2D models people may have made for SDXL by now. I think base SDXL is capable of producing nudity (unlike Dalle, which tries to aggressively filter it), but I don't think it's specifically trained on it so it's not going to be very good.
I really don't know about Midjourney, but people stopped talking about it so I assume it hasn't kept up.
We really lucked out with the NAI leak. NovelAI released an update to its own thing and it's not as good as model merges with extensions and loras and the like, but I do hear it's better at following prompts, and as a general model it's probably better than a lot of merges in existence today. SDXL could become great someday, but I won't be using it any time soon. It might become more appealing once 24GB of VRAM becomes the norm instead of the top end of the scale.
Speaking of VRAM, it really does limit so much of what I can do. I'm really feeling it when attempting animation stuff. Another "wait for the hardware to catch up" scenario. 4090 would help, but even its 24gb of VRAM will hit the wall with animation.
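To make the 'merges plus loras' thing concrete for anyone who hasn't touched it, the basic SD1.5 workflow looks roughly like this in diffusers terms (the webui does the same through its UI). The filenames here are stand-ins, not real releases.
```python
# Minimal sketch of the SD1.5 "community merge + character LoRA" workflow via
# diffusers. The .safetensors filenames are placeholders, not real releases.
import torch
from diffusers import StableDiffusionPipeline

# A merged anime checkpoint distributed as a single .safetensors file.
pipe = StableDiffusionPipeline.from_single_file(
    "anime_merge_v3.safetensors", torch_dtype=torch.float16
).to("cuda")

# Layer a character LoRA on top of the base checkpoint.
pipe.load_lora_weights("kuon_lora.safetensors")

image = pipe(
    "kuon, portrait, best quality",           # placeholder prompt
    negative_prompt="lowres, bad anatomy",
    num_inference_steps=28,
    guidance_scale=7.0,
    cross_attention_kwargs={"scale": 1.0},    # LoRA strength, i.e. "full strength"
).images[0]
image.save("kuon.png")
```
Training a LoRA is where the VRAM pain comes in; generating with one is cheap, which is why the ecosystem leans on them so hard.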
Anonymous 11/20/23 (Mon) 09:03:07 No. 116360 >>116361 >>116384
>>116251
https://www.theverge.com/2023/11/20/23968829/microsoft-hires-sam-altman-greg-brockman-employees-openai
Looks like Microsoft hired Sam Altman. Microsoft already heavily funded/partnered/whatever with OpenAI so I'm not sure what will change now. If this was something already in the works, however, then it would explain him getting fired.
Still seems like a mess that lowers people's confidence in the company.
I've been messing around more with some image gen stuff. It seems there's an experimental thing to lower generation time by half, but it's not quite there yet as it hits the quality kind of hard. It's called LCM and it's not included in regular SD. You need to download a LORA and also another extension that will unlock the new sampler. I learned of this by coincidence because said extension is the animation one I've been messing with.
You can read about some of this on the lora page on civitai:
https://civitai.com/models/195519/lcm-lora-weights-stable-diffusion-acceleration-module
I was able to generate this grid of 6 images (generation and upscale) in 42 seconds on a 3080, which is pretty amazing. That's roughly the same as upgrading to a 4090. There's definitely some information lost in addition to the quality hit, though, as my Kuon lora is at full strength and it's failing to follow it. It shows amazing promise, however, considering it's still in its early experimental phase.
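For reference, the diffusers equivalent of that LCM setup is roughly the following; I'm on the webui extension route myself, so take this as a sketch of the idea (swap the scheduler, load the acceleration lora, drop to a handful of steps with low CFG) rather than my exact setup.
```python
# Rough diffusers equivalent of the LCM-LoRA speedup described above (the
# webui route uses the linked LoRA plus an extension that adds the sampler).
import torch
from diffusers import LCMScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Swap in the LCM scheduler and load the acceleration LoRA.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

# LCM wants very few steps and little to no CFG -- that's where the speedup
# (and most of the quality / prompt-following loss) comes from.
image = pipe(
    "kuon, portrait, best quality",  # placeholder prompt
    num_inference_steps=4,
    guidance_scale=1.0,
).images[0]
image.save("lcm_test.png")
```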
Anonymous 11/20/23 (Mon) 09:10:55 No. 116361 >>116365 >>116369
>>116360 That's pretty big news. The video I was watching earlier suggested this could cause a lot of the people at OpenAI to resign and follow him.
Hopefully this causes a shakeup within OpenAI and through one way or another they end up releasing their "unaligned" ChatGPT and Dalle models publicly.
Anonymous 11/20/23 (Mon) 09:29:06 No. 116365
>>116361 The thing is I don't think Sam Altman is actually involved with any tech stuff. I think he's like Steve Jobs; people associate him with Apple because he likes to hear himself talk, but he's just a businessman/investor/entrepreneur that is unremarkable aside from his luck and/or ability to receive investment money. The Wozniak equivalents are still at OpenAI (or maybe they left already at some point) as far as I'm aware.
It's possible that he's friends with those people and maybe that could influence things?
Anonymous 11/20/23 (Mon) 09:29:35 No. 116366
I saw it again
Anonymous 11/20/23 (Mon) 13:36:29 No. 116369
>>116368 Would be, I eagerly await that happening and then a rogue employee doing what
>>116361 said
Anonymous 11/21/23 (Tue) 00:25:07 No. 116384 >>116403 >>116405
>>116360 How is SD compared to this time last year? I messed around with it about a year ago but it was kinda boring so I moved on to other things. Getting better at producing art without computers seemed like a better use of my time. But I'll admit AI waifu generation is great for rough drafting characters and what-not.
Even with a 980ti I was managing to generate stuff in a timely fashion. Do the gains apply to those older model graphics cards too? I haven't been able to grab anything since the GTX980 generation. Prices are too high and supplies too thin. I almost bought a new graphics card last year but they were all bought within seconds of new stock coming in. I'm not paying some scalping faggot double MSRP for something that should be going for 1/4th of the price.
All this AI shit was pushed by shell companies from the start. That's how IT in the west works. You set up a stupid "start up" shell corporation so early investors and insiders can get in before a public offering. Then you go public and run up the price of the stock. Then they absorb it into one of the big four existing /tech/ companies. They fire almost everyone at that point and replace them with pajeets and other diversity hires that don't know enough to leak anything worthwhile.
You're getting to play with the software on your local machine because they wanted help beta testing it. Once it's good and finished they'll start requiring you to access their cloud/server farm and make you pay for compute. They'll integrate the various machine learning algos together and censor them so they won't generate or display anything deemed problematic. In time you'll have software similar to Blender for shitting out low quality works of anime, cartoons, movies and other forms of "art" coming out of the MSM.
What I'm waiting for is someone to combine Miku with machine learning. Then I could produce entire songs without any work. I could also use the software for all my VA needs. I'm surprised it isn't a thing yet.
This software is being hyped up for several reasons but the main one right now is that it's keeping the price of consumer GPUs so high. GPUs haven't really improved in any meaningful way for almost a decade now. But Nvidia is still able to claim they're advancing at this amazing rate on the hardware side because some new software outside of gaming came along to sustain the hype train. Games haven't advanced in 15+ years thanks to everyone using the same two crappy engines. So they couldn't drive hype like that anymore.
Anonymous 11/21/23 (Tue) 05:24:57 No. 116403
>>116384 Please keep the discussion about the technology itself and adapt your vocabulary to that of an old-fashioned friendly imageboard instead of an angsty political one. A post like that (parts of it specifically) is liable to get deleted as well, FYI. Consider this as friendly "please assimilate to kissu's laid back atmosphere instead of bringing 4chan/8chan here" advice.
There's been various improvements in efficiency since then. I'm just a user of this stuff so I don't know the stuff that goes on under the hood, but speed and VRAM usage have definitely become more efficient since then. It was early 2023 when, uh, Torch 2.0 gave a big boost and there's probably been some other stuff going on that I don't know about. There's also stuff like model pruning to remove junk data to cut model sizes down by 2-4GB, which makes loading them into memory cheaper and allows more hoarding.
I've recently moved to a test branch that uses "FP8" encoding or something which I honestly do not understand; it loses a slight amount of "accuracy", but it's another improvement in reducing the amount of VRAM used for this stuff. Right now everyone uses FP16 and considers FP32 to be wasteful. It looks to be about a 10-20% VRAM shave which is very nice. You need a specific branch, however, the aptly named FP8 one:
https://github.com/AUTOMATIC1111/stable-diffusion-webui/tree/test-fp8
The bad news is that a lot of the cool new extensions like ControlNet are total VRAM hogs. Part of the reason I never use it is that I'd rather gamble and create 40 regular images in the time I could make 4 ControlNet ones. (that time includes setting up the images and models and so on)
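To put rough numbers on why the precision matters so much for VRAM: weights take 4 bytes each at FP32, 2 at FP16 and 1 at FP8, and the SD1.5 UNet has on the order of 860 million of them (approximate figure). A quick back-of-the-envelope sketch; the real saving from that branch is smaller (the 10-20% above) because activations, the VAE, the text encoder and so on still take their share.
```python
# Back-of-the-envelope: bytes per weight at different precisions, applied to
# the ~860M parameters of the SD1.5 UNet (approximate figure).
UNET_PARAMS = 860_000_000
for name, bytes_per_weight in [("FP32", 4), ("FP16", 2), ("FP8", 1)]:
    gib = UNET_PARAMS * bytes_per_weight / 2**30
    print(f"{name}: ~{gib:.1f} GiB just for the UNet weights")

# The FP16 part is a load-time flag in diffusers; FP8 weight storage is what
# that test branch experiments with and isn't shown here.
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
```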
Anonymous 11/21/23 (Tue) 05:33:37 No. 116405
>>116384 that's awfully depressing for something people are having fun with
Anonymous 11/23/23 (Thu) 04:36:33 No. 116476
https://www.reuters.com/technology/sam-altman-return-openai-ceo-2023-11-22/
The OpenAI/Microsoft brouhaha is over with the usual business treachery and power struggles resolved for now. Altman is back after a bunch of employees threatened to quit. There's been a change of the board or something so presumably it's all people loyal to him now. I read theories that it was the board's last desperate attempt to retain some power, but it failed utterly and now Altman has full control.
I don't care about this kind of thing since it's just normal greedy monster stuff that's a regular part of the business world, with none of the named people actually involved with the technology, but as it concerns ChatGPT and LLM stuff it seems like there's not going to be any changes from this that we'll know about. It's kind of depressing that all these rich "entrepreneurs" are who we know instead of the people actually creating the breakthroughs, but I guess there's nothing new there. Steve Jobs invented computers and Sam Altman invented LLMs.
I read some people say it might be a loss for AI ethics or whatever, but I sincerely do not think anyone actually cared about that stuff. Those people would have left the company after it went closed source years ago and partnered with Microsoft and such. Those so-called ethical people became Anthropic, who created a model named Claude that was so infamously censored that its second version performs worse than the first in benchmarks. But, Amazon bought them and now you can do whatever you want with it since they got their money.
So... yeah, nothing has changed. I hope local stuff gets better because I still don't want to rely on these people.
Anonymous 11/23/23 (Thu) 09:33:39 No. 116483 >>116507
AI chat models love to recommend books that do not exist. Why is it so bad with books specifically?
Anonymous 11/24/23 (Fri) 09:08:40 No. 116507 >>120487
>>116483 It's not exclusive to books. It's referred to as a "hallucination", in which it will confidently list things that don't exist. There's a story from months ago when some idiot lawyer asked it for legal advice and used it to cite precedent from court cases that never happened. I'm sure lots of kids have failed assignments for similar reasons.
People are prone to thinking it's truly intelligent and rational instead of effectively assembling new sentences from a large catalog of examples. A huge reason why text LLMs can work at all is that they don't automatically go with the best possible word, but will instead semi-randomly diverge into other options. I think the degree of randomness is called temperature?
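Yeah, temperature is the usual name for it. Roughly: the model's raw score for each candidate token gets divided by the temperature before being turned into probabilities, so a low temperature almost always takes the top word and a high one wanders. A toy sketch with made-up numbers, not a real model:
```python
# Toy illustration of temperature sampling; the logits are made up, not from
# a real LLM.
import numpy as np

rng = np.random.default_rng(0)
tokens = ["cat", "dog", "lizard", "toaster"]
logits = np.array([3.0, 2.5, 1.0, -1.0])  # model's raw scores for the next word

def sample(temperature: float) -> str:
    scaled = logits / temperature              # temperature rescales the scores
    probs = np.exp(scaled - scaled.max())      # softmax into probabilities
    probs /= probs.sum()
    return str(rng.choice(tokens, p=probs))

for t in (0.2, 1.0, 2.0):
    print(f"temperature={t}:", [sample(t) for _ in range(10)])
# Low temperature almost always prints "cat"; high temperature drifts into the
# other options, which is the semi-random divergence described above.
```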
Anonymous 02/27/24 (Tue) 19:37:38 No. 120446 >>120470
I think that when it comes to using AI for improving video quality, those 4k AI upscales of anime do a pretty good job when there's no quality alternative (60 fps is still garbage)
For the most recent example I was able to find of a massive upgrade that far outpaces the original video source, I was looking at the OP for Dancouga Nova. Every video source for it looks more or less like
https://www.youtube.com/watch?v=A4GIY9Lfpq4 , high in artifacts or noise and extremely low res, so it looks like ass when on fullscreen (I checked the DVDs). However looking at the AI upscale,
https://www.youtube.com/watch?v=-S5LeYcgrh4 , one can see a massive improvement if they were to view it in fullscreen on a 4k monitor. The one drawback seems to be that there's a bit of blobbiness in some areas, but in almost every other way it beats the original. In fact I'd say that AI upscaling does a much better job on average from what I've seen compared to all the kuso upscaled BDs that anime companies have shat out for older stuff.
Anonymous 02/28/24 (Wed) 06:24:08 No. 120470
>>120446 Yeah, that's not bad. I think the term "AI" is abused a bit much and this is really just a good upscaler. I guess if something like waifu2x is considered AI then this is too, huh. It's all about the denoising to make it 'crisp' and yet not create blobs. It's not like you're getting new information, it's just cleaning up the artifacts.
In other news, tumblr, the company that famously killed itself in a day by banning porn and driving off its artists, is now going to sell all of its user data to OpenAI/Microsoft. The data stretches back to 2013, so while various stuff was deemed too evil to be on tumblr it's good enough to be sold.
https://www.theverge.com/2024/2/27/24084884/tumblr-midjourney-openai-training-data-deal-report
https://www.engadget.com/tumblr-and-wordpress-posts-will-reportedly-be-used-for-openai-and-midjourney-training-204425798.html
This AI stuff is really getting ugly.
What's your phone wallpaper? 02/28/24 (Wed) 22:15:37 No. 120487
>>116507 There was a pretty funny incident around a year ago in my country.
Here, national universities don't have entrance exams, instead you get a final exam at the end of high school and you need to pass that exam if you want to enter any uni. So the time of the exam is flipped from start of uni to end of high school and everyone across the whole country does the same exam (for math and literature at least, med school has another exam for med stuff, etc.)
Anyway, last year, in the literature exam, there was some question about the plot of a book that's mandatory reading, and the question asked you to write the answer in words, so it wasn't just circling the correct answer. And what happened is that several thousand students all wrote the exact same incorrect answer, word for word. They all used chatgpt, of course, probably with a similar prompt, and it gave everyone the exact same sentence.
It was a huge scandal and it was pretty fun listening to literature professors' reactions. Apparently they'll be upping the security on checking phone usage during the test this year, but I'm expecting something similar to happen again lol
Anonymous 03/30/24 (Sat) 23:00:01 No. 122391
>>122390 hehehe
For those unaware, the go-to joke for GPT3 was "What did the fish say when it hit a wall?" or however it went.
That 2023 is no longer entirely true, although it's up to opinion. Claude3 is pretty good at humor stuff and it makes you wonder where it's scraping the data from (there's obviously lots of 4chan and forum stuff). It's a weird situation because it can't actually be novel since it's an LLM and an important thing about humor is novelty. Basically it's funny to you as long as the data it's referencing isn't directly known to you.
I'll be able to show some examples soon, I think...
Anonymous 04/02/24 (Tue) 11:42:24 No. 122492
I wonder how I could feed it some information about newer tech problems I have a hard time understanding and have it digest them down to something youtuber-tier
Anonymous 05/19/24 (Sun) 04:52:52 No. 124640 >>124679
>>124639 Yeah, "live" OCR stuff is nothing new and people have been doing it for nearly a decade now, but having far faster stuff that's also a bit better (but still not great, contextual language and all that stuff) is really quite amazing. I didn't stop the Nosuri playthrough I was doing because of the translation, but because of the font being unreadable with OCR...
Well, maybe AI OCR stuff will progress, too. I don't think I could get away with sending GPT-4o thousands of screenshots without paying.
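For reference, the non-AI version of the 'live' OCR loop people have been running for years is pretty simple; something like the sketch below, assuming the tesseract binary and its Japanese data are installed, with the capture box being wherever the game's text window sits (coordinates here are placeholders).
```python
# Bare-bones "live OCR" loop: grab part of the screen, run tesseract on it,
# print the text when it changes. Assumes tesseract + Japanese language data
# are installed; the bounding box is a placeholder for the game's text area.
import time
from PIL import ImageGrab
import pytesseract

TEXTBOX = (100, 600, 1180, 700)  # left, upper, right, lower (placeholder)

last = ""
while True:
    shot = ImageGrab.grab(bbox=TEXTBOX)
    text = pytesseract.image_to_string(shot, lang="jpn").strip()
    if text and text != last:    # only print when the line changes
        print(text)
        last = text
    time.sleep(1.0)
```
The fancier tools basically swap the tesseract call for a model trained on game/manga fonts, which is exactly the part that choked on that font.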
Anonymous 05/20/24 (Mon) 03:20:15 No. 124679
>>124639 >>124640 I like the AI/live-OCR stuff but I worry about people using it to churn out lazy translations they don't bother to check. We're already seeing a lot of that and now some companies are trying to cash in.
But I think it would be a very valuable tool for learning a second language, as long as it doesn't teach you bad habits. What I'm really looking forward to is live-speech translation improving. Picking up kana and some basic kanji didn't take me that long. But learning how to speak like a native speaker and being able to understand a native speaker are a very different matter. Especially when you do not have access to one IRL to practice with. Even then they're usually speaking slowly and not teaching you certain words and concepts (like internet slang). No Japanese teacher in an institution of learning is going to cover subjects like common otaku slang or curse words.
Then there is the issue of dialects. You could spend years learning one dialect and be totally unable to understand someone speaking the language in a dialect common just 1 hour outside of the major cities. The main barrier I had learning how to speak basic Japanese was the fact that our teacher couldn't understand our local English dialect well and we could barely understand her Engrish. Every lesson was incredibly frustrating, especially with a classroom of idiots making fun of her daily.