I find it quite common (and confusing) for certain news types like policy, e.g. “party A reverses the disapproval to oppose the once-unacceptable ban”
I mean, this article is from 2022 and claims to use seaborn, but not really. It says something about their effort, even before the whole AI hype …
https://www.geeksforgeeks.org/how-to-create-a-stacked-bar-plot-in-seaborn/
I’m also curious. A quick search came up with these. Not sure which one is most reliable/updated
Many things are called “AI models” nowadays (unfortunately due to the hype). I wouldn’t dismiss the tools and methodology yet.
That said, the article (or the researchers) did the analysis a disservice by not including a link to the report (and code) that outlines the methodology and what the distribution of similarities looks like. I couldn’t find a link in the article, and a quick search didn’t turn up anything.
you should try to ask the same question using xAI / Grok if possible. May also ask ChatGPT about Altman as well
welp, guess you’re right. It’s not common, but it’s not just a few people either.
tell me more about the “almost” part …
Based on this reddit comment, that website is not affiliated with the magic-wormhole CLI tool
I believe experiments like these should move slower and with more scrutiny. As in more animal testing before moving on to humans, esp. due to the controversies surrounding Neuralink’s last animal experiments.
I think porn generation (image, audio and video) will eventually be very realistic and very easy to make with only a few clicks and some well-crafted prompts. Things will be on a whole other level compared to what Photoshop used to be.
re: your last point, AFAIK, the TLDR bot is also not AI or LLM; it uses more classical NLP methods for summarization.
If you suspect that it’s been modified, try going to places like the Internet Archive or archive.today to check. The claims you’ve made seem big, so back them up with sources.
Is there a database tracking companies that start out with good intentions and then eventually get bought out or sell out their initial values? I’m wondering what the deciding factors are, and how long it takes for them to turn.
re 1: out of curiosity, do you encounter DNS leaks when using WireGuard?
re 4: you can also check out https://starship.rs/, which lets you configure your shell prompt very intuitively with a TOML file.
Hold up, are you sure you can’t view Discussions or Wiki? On which repos can you not view them?
I’m fine viewing them for public repos that I usually visit.
Asking to make sure that Github is not slowly rolling out this lockdown.
Reminds me of this article https://www.alexmurrell.co.uk/articles/the-age-of-average where the author pulls in different examples of designs and aesthetics converging to some “average”.
I’m feeling conflicted about these trends: on one hand, things seem to be becoming more accessible; on the other, it feels like a loss.
This may be especially relevant with generative AI - at least for the very few generative artworks I look at, at some point they start to feel the same, impersonal.
care to explain the reference?
They don’t seem to allow account deletions. Does that mean the count could include accounts they still keep, even though people no longer use their services?
You can also just post the 4-5 data items without claiming that this is low or high credibility or bias, then let people make the decision themselves. Something like this, maybe:
“Based on source X, this source’s media bias is:
Methodology of X is at: ”