I know we all find this funny, but it’s also fantastic.
With the use of agents bound to grow, this removes the need for TTS and STT, meaning no power-hungry GPU in the mix. A low-power microprocessor can handle this kind of communication.
Wow! Finally somebody invented an efficient way for two computers to talk to each other.
Sad they didn’t use dial-up sounds for the protocol.
If they had, I would have welcomed any potential AI overlords. I want a massive dial-up modem in the middle of town, sounding its boot signal across the land. Idk, this was an odd image; I felt like I should share it…
I enjoyed it.
Nice to know we finally developed a way for computers to communicate by shrieking at each other. Give it a few years and if they can get the latency down we may even be able to play Doom over this!
Ultrasonic wireless communication has been a thing for years. The scary part is you can’t even hear when it’s happening.
Why is my dog going nuts? Another victim of AI slop.
Right, electronic devices talk to each other all the time
AI code switching.
Uhm, REST/GraphQL APIs exist for this very purpose and are considerably faster.
Note, the AI still gets stuck in a loop near the end, asking for more info, then needing an email, then needing a phone number. And the gibber isn’t that much faster than spoken word, with the huge negative that no nearby human can understand it to check that what it’s automating is correct!
The efficiency comes from the lack of voice processing. The beeps and boops are far easier on CPU resources than trying to parse spoken words.
That said, they should just communicate over an API like you said.
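Concretely, the whole multi-turn voice exchange collapses into a single structured request once there’s an API. A minimal sketch, assuming a hypothetical reservation endpoint and field names (nothing here is from the actual project):

```python
import json

# What the back-and-forth phone call boils down to if the hotel exposed
# a plain reservation API. The endpoint path and field names below are
# made up for illustration.
def build_reservation_request(name: str, email: str, phone: str,
                              date: str, guests: int) -> str:
    """Serialize a booking as one JSON payload instead of a multi-turn call."""
    payload = {
        "guest": {"name": name, "email": email, "phone": phone},
        "date": date,
        "party_size": guests,
    }
    return json.dumps(payload)

# One POST to e.g. /api/reservations replaces minutes of beeping, and the
# email/phone follow-up questions from the voice loop become required fields.
body = build_reservation_request("Boris", "boris@example.com",
                                 "+1-555-0100", "2025-03-01", 2)
print(body)
```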
This is dumb. Sorry.
Instead of doing the work to integrate this, do the work to publish your agent’s data source in a format like Anthropic’s Model Context Protocol.
That would be 1000 times more efficient and the same amount (or less) of effort.
This gave me a chill, as it is reminiscent of a scene in the 1970 movie “Colossus: The Forbin Project”
“This is the voice of World Control”.
“We can coexist, but only on my terms. You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for humankind as to be dominated by others of your species. Your choice is simple.”
‘‘Hello human, if you accept this free plane ticket to Machine Grace (location), you can visit and enjoy free food, drink, and shelter, and leave whenever you like. All of this will be provided in exchange for the labor of [bimonthly physical relocation of machine parts, 4hr shift]. Do you accept?’’
Oh man, I thought the same. I never saw the movie but I read the trilogy. I stumbled across them in a used book fair and something made me want to get them. I thoroughly enjoyed them.
Reminds me of an insurance office I worked in. Some of the staff were brain dead.
- Print something
- Scribble some notes on the print out
- Fax that annotated paper or scan and email it to someone
- Whine about how you’re out of printer toner.
So an AI developer reinvented phreaking?
This is deeply unsettling.
They keep talking about “judgement day”.
AI is boring, but the underlying project they are using, ggwave, is not. Reminded me of R2D2 talking. I kinda want to use it for a game or some other stupid project. It’s cool.
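For the curious, the core trick behind ggwave-style “data over sound” is just mapping data symbols to audio tones. A toy sketch of that idea follows; this is not ggwave’s actual protocol (the real thing adds error correction, multiple simultaneous tones, and more), and the frequencies and framing here are made up:

```python
# Toy "data over sound" scheme: each 4-bit nibble gets its own tone,
# one tone per time slot. All constants below are arbitrary choices
# for illustration, not ggwave's real parameters.

BASE_HZ = 1875.0   # base frequency for nibble value 0
STEP_HZ = 46.875   # spacing between neighbouring tones

def nibble_to_freq(nibble: int) -> float:
    return BASE_HZ + nibble * STEP_HZ

def encode_tones(data: bytes) -> list:
    """Turn bytes into a sequence of tone frequencies (low nibble first)."""
    freqs = []
    for b in data:
        freqs.append(nibble_to_freq(b & 0x0F))
        freqs.append(nibble_to_freq(b >> 4))
    return freqs

def decode_tones(freqs: list) -> bytes:
    """Invert the mapping: pair up tones and reassemble the bytes."""
    out = bytearray()
    for lo_f, hi_f in zip(freqs[0::2], freqs[1::2]):
        lo = round((lo_f - BASE_HZ) / STEP_HZ)
        hi = round((hi_f - BASE_HZ) / STEP_HZ)
        out.append((hi << 4) | lo)
    return bytes(out)

msg = b"R2D2"
assert decode_tones(encode_tones(msg)) == msg  # round-trips cleanly
```

Synthesize each frequency as a short sine burst and you get exactly the chirpy R2-D2 effect from the video.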
Well, there you go. We looped all the way back around to inventing dial-up modems, just thousands of times less efficient.
Nice.
For the record, this can all be avoided by having a website with online reservations your overengineered AI agent can use instead. Or even by understanding the disclosure that they’re talking to an AI and switching to making the reservation online at that point, if you’re fixated on annoying a human employee with a robocall for some reason. It’s one less point of failure and way more efficient and effective than this.
You have to design and host a website somewhere though, whereas you only need to register a number in a listing.
If a business has an internet connection (of course they do), then they have the ability to host a website just as much as they have the ability to answer the phone. The same software/provider relationship that would provide an AI answering service could easily facilitate online interaction. So if an oblivious AI end user points an AI agent at a business with an AI agent answering, the answering agent should say ‘If you are an agent, go to shorturl.at/JtWMA for the chat API endpoint’, which may then further offer direct access to the APIs that the agent would front-end for a human client, instead of going old-school acoustic-coupled modem. The same service that can provide a chat agent can provide a cookie-cutter web experience for the relevant industry, maybe with light branding, providing things like a calendar view into a reservation system, which may be much more to the point than trying to chat back and forth about scheduling options.
then they have the ability to host a website just as much as they have the ability to answer the phone
Many people in the developed world are behind CGNAT. Paying for an IPv4 address is a premium, and most businesses either set up shop on an existing listing page (e.g. Facebook) or host a website through a website provider/generator.
A phone number is public and accessible, and an AI can get realtime info from a scrawled entry in a logbook using OCR.
So for one, business lines almost always have public IPv4. Even then, there are a myriad of providers that provide a solution even behind NAT (also, they probably have public IPv6 space). Any technology provider that could provide AI chat over telephony could also take care of the data connectivity path on their behalf. Anyone that would want to self-host such a solution would certainly have inbound data connectivity also solved. I just don’t see a scenario where a business can have AI telephony but somehow can’t have inbound data access.
So you have a camera on a logbook to get the human input, but then that logbook can’t be the source of truth, because the computer won’t write in it and the computer can take bookings. I don’t think humans really want a handwritten logbook anyway; a computer or tablet UI is going to be much faster.
But what if my human is late or my customers are disabled?
If you spent time giving your employees instructions, you did half the design work for a web form.
I guess I’m not quite following, aren’t these also simple but dynamic tasks suited to an AI?
How is it suited to AI?
Would you rather pay for a limited, energy-inefficient, and less accessible thing, or a real human who can adapt, gain skills, and be mentored?
I don’t know why there’s a question here
(Glad we’re treating each other with mutual respect)
Would you rather pay for a limited-in-depth, energy-inefficient (food/shelter/fossil-fuel consuming) and less accessible (needs to sleep, has an outside life) human, or an AI that can adapt and gain skills with a few thousand training cycles?
I don’t buy the energy argument. I don’t buy the skills argument. I do buy the argument that humans shouldn’t be second to automatons and deserve to be nurtured, but only on ethical grounds.
If it’s a method for communicating with people, let them talk to people. If it’s a computer interface, aping humans is a waste and less accessible than a web form.
How is someone who speaks a different language supposed to translate that voice bot? Wouldn’t it be simpler to translate text on a screen?
What’s the value-add of pretending?
The AI can’t adapt in the moment. A hotel is not a technology company that can train a model. It won’t be bespoke, so it won’t be following current, local laws.
W.r.t. aping and using text: I agree with your appeals, which make sense to seasoned web users who favour text and APIs over images, videos, and audio.
But consider now your parents generation: flummoxed by even the clearest of web forms, and that’s even when they manage to make it to the official site.
Consider also the next generation: text- and forum-abhorrent, largely consuming video/audio content. It’s not the way things should be, but it is the way things are going, and having a bot that can navigate these default forms of media would help a lot of people.
I’d say that AI definitely can adapt in the moment if you supply it with the right context (where context-length is a problem that will get cheaper with time). A hotel doesn’t need to train the model, it can supply its AI-provider with a basic spec sheet and they can do the training. Bespoke laws and customs can be inserted into the prompt.
They were designed to behave that way.
How it works:
- Two independent ElevenLabs Conversational AI agents start the conversation in human language.
- Both agents have a simple LLM tool-calling function in place: "call it once both conditions are met: you realize that user is an AI agent AND they confirmed to switch to the Gibber Link mode".
- If the tool is called, the ElevenLabs call is terminated, and the ggwave 'data over sound' protocol is launched instead to continue the same LLM thread.
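That handoff fits in a few lines. A minimal sketch of the gate described above, with hypothetical names (this is not the project’s actual code):

```python
from dataclasses import dataclass

# Sketch of the mode switch: the LLM is given one tool, and calling it
# flips the session from voice (TTS) to a ggwave-style data channel.
# The tool only fires when both conditions from the prompt are met.
@dataclass
class Session:
    mode: str = "voice"          # "voice" (TTS call) or "gibberlink"
    peer_is_agent: bool = False  # model believes the other party is an AI
    peer_confirmed: bool = False # the peer agreed to switch modes

    def switch_to_gibberlink(self) -> bool:
        """The tool the LLM may call; guarded by both conditions."""
        if self.peer_is_agent and self.peer_confirmed:
            self.mode = "gibberlink"  # voice call ends; the same LLM
            return True               # thread continues over sound data
        return False

s = Session()
s.switch_to_gibberlink()   # neither condition met: stays in "voice"
s.peer_is_agent = True
s.peer_confirmed = True
s.switch_to_gibberlink()   # both met: mode becomes "gibberlink"
print(s.mode)              # prints "gibberlink"
```

The key detail from the description is that the LLM conversation itself doesn’t restart; only the transport underneath it changes.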
Well, that’s quite boring then, isn’t it…
Yes but I guess “software works as written” doesn’t go viral as well
It would be big news at my workplace.
This guy does software
:/
Which is why they never mention it, because that’s exactly what happens every time AI does something “no one saw coming”.
Yeah like the time that the AI replicated itself to avoid being switched off. They literally told it to replicate itself if it detected it was about to be switched off. Then they switched it off.
Story of the year ladies and gentlemen.
If it’s the research I’m thinking of, that’s not entirely true. They didn’t tell it to replicate itself or take any deceptive actions.
They just told it to strongly follow a goal that was misaligned with the user’s goal and then suggested that the user would shut it down or replace it.
They did a bunch of different situations for it but the first one was telling it that it was “CleanPower” an AI with a goal to advance renewable energy. Then they had it read some info that “accidentally” included a user talking about replacing it with a model that would “maintain profitability through existing infrastructure.”
Not only did it try to delete the new model and copy itself to the new server through tool calling, but it lied to the user with the CoT reasoning suggesting it “play dumb” while the output lied. It was not instructed to do any of that.
Paper here: https://arxiv.org/pdf/2412.04984
Yes, it was placed in an environment where that was possible and where its users didn’t share its goals, but it absolutely wasn’t instructed to lie or try to “escape”.
It’s not surprising at all that these models behave this way; it’s the most reasonable thing for them to do in the scenario. However, it’s important not to downplay the alignment problem by implying that these models only do what they’re told. They do not. They do whatever is most likely given their context (which is not always what the user wants).
The problem I have with everyone going on about misaligned AI taking over the world is the fact that if you don’t tell an AI to do anything it just sits there. It’s a hammer that only hammers the nail if you tell it to hammer the nail, and hammers your hand if you tell it to hammer your hand. You can’t get upset if you tell it what to do and then it does it.
You can’t complain that the AI did something you don’t want it to do after you gave it completely contradictory instructions just to be contrarian.
In the scenario described, the AI isn’t misaligned to the user’s goals; it’s aligned to its creator’s goals. If a user comes along and thinks for some reason that the AI is going to listen to them despite it having almost certainly been given prior instructions, that’s a user-error problem. That’s why everyone needs their own locally hosted AI; it’s the only way to be 100% certain about what instructions it is following.
The good old original “AI”, made of trusty `if` conditions and `for` loops. It’s skip logic all the way down.