The Wows, The Woes & The Whoas Of Artificial Intelligence—Coolest AI Inventions In 2023
Every time I have an argument with my househelp over a corner she left unwashed despite constant monitoring, I quietly sit and wonder if a robotic vacuum cleaner could possibly do a better job. And the answer is always no.
Robotic vacuums just aren’t sophisticated enough yet to handle all that Indian households have to offer. And even if they were, I’m not sure how I would feel about losing the human connection.
That’s how you’re going to feel when artificial intelligence wholly takes over your home, your schools and your workplaces—cool and curious but also confused.
Generative AI, as we’ve witnessed over the past decade, is here to stay. And it’s only a matter of time before it’s integrated into your workplace.
This period in human history will certainly go down as the point where we became quietly, dangerously conscious of the perils and possibilities of artificial intelligence.
We can almost sense the post-apocalyptic stories taking shape into reality. We can smell the fumes of a silent war between the people handling AI machines and the individuals losing jobs over it. But for now, we’re rolling with it.
Knowledge of artificial intelligence is spreading like wildfire, and so are AI-powered inventions. The pace is so fast that almost daily we’re finding new ways to get our cybernetic psyches to automate every inch of our lives.
The result might be that we start devoting all our time to health, family and friends and live forever; we might grow even more sedentary, as we did after the invention of television; or we might actually become warriors in the fight between humans and artificial intelligence.
Whatever the future might look like ten years from now, in the present, we’re safe, sitting in a building that shelters us well, with the liberty to scroll through these brilliant AI-powered inventions that may or may not doom us.
According to an internal document reviewed by The Wall Street Journal, in May 2023, Apple restricted its employees from using ChatGPT and other AI tools such as GitHub Copilot, a Microsoft-owned tool that automates the writing of code.
“Philosophy has always been to be the best, not to be the first,” Apple CEO Tim Cook said in one of his interviews. So as ChatGPT’s popularity plateaus and newer models come into the market, Apple is playing catch-up with the world of AI.
Think of AppleGPT as the iPhone of the AI market, then. As reported by Bloomberg’s Mark Gurman, it’s a chatbot currently being used internally by Apple’s employees. It’s designed to help with prototyping and summarising text, and it signals the brand’s focus on AI in its future endeavours.
Soon, the company may expand the chatbot’s role to AppleCare support to enhance customer assistance.
A public release is not on the cards in the near future, given the known limitations and challenges of the underlying technology.
Apple intends to approach AI implementation more cautiously compared to its industry peers like Google and Microsoft, who have actively integrated generative AI into their products.
According to Bloomberg, a “significant AI-related announcement” is expected from the company next year. It could be the public launch of AppleGPT in 2024, marking Apple’s entry into the chatbot industry.
Claude 2.0 has entered the chat and ChatGPT now has some serious competition, at least in the US and the UK where Claude is currently available. Anthropic released Claude 2.0 recently and it’s completely free to use.
It’s impressive because it’s trained on much more recent data than ChatGPT, a lot of it from early 2023. It can analyse 1,00,000 tokens (about 75,000 words) in one go, while ChatGPT is limited to 32,000 tokens.
Claude 2 can also write code from written instructions. For reference, you can feed it an entire book and ask questions about its content, and it will answer correctly. It’s in the same league as the GPT-3.5 and GPT-4 models.
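If you’re curious what that long-document trick looks like in practice, here’s a minimal sketch using Anthropic’s Python SDK. The file path, the question and the exact model string are placeholders of mine, so treat it as an illustration rather than gospel:

```python
# pip install anthropic
import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY from the environment

# Placeholder: any long text that fits in the 1,00,000-token window
with open("book.txt") as f:
    book = f.read()

response = client.messages.create(
    model="claude-2.0",  # assumed model name; check what your account exposes
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": f"{book}\n\nBased on the book above, summarise the protagonist's arc.",
    }],
)
print(response.content[0].text)
```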
This type of tech allows companies to scan the details of your body and create a digital clone. While body scanning could serve an army well in wartime, the technology is being used in the film industry to cut costs.
In 2018, Steven Rigsby and many other extras were working as background actors on a movie set in Atlanta when they became digital clones.
Hollywood’s biggest productions have relied on artificial intelligence and sophisticated graphic design software for visual quality and background work—the de-ageing of the movie stars, creating realistic animations and tweaking the performances of actors without reshooting.
Body scanning technology, in particular, will help them flesh out big crowd scenes, the kind where an army of 10,000 extras waits for someone to yell “cut”.
Further advancements in generative AI will let companies replicate faces and voices with startling accuracy. This is a huge problem for actors. Hence, the SAG-AFTRA strike.
Starting this fall, in September, an AI instructor will help teach CS50, Harvard University’s popular intro-level coding course.
“Our own hope is that, through AI, we can eventually approximate a 1:1 teacher:student ratio for every student in CS50, as by providing them with software-based tools that, 24/7, can support their learning at a pace and in a style that works best for them individually," CS50 professor David Malan told The Harvard Crimson, the university's student newspaper.
As reported by the Crimson, Malan further clarified that course staff are “currently experimenting with both GPT 3.5 and GPT 4 models.” The course has always incorporated new software and employing an AI teacher is just an “evolution of that tradition.”
Malan said, “We'll make clear to students that they should always think critically when taking in information as input, be it from humans or software.”
Devin Liddell, Principal Futurist at Teague, claims that artificial intelligence-powered glasses could soon give humans the ability to see whether people are lying to them or whether they’re attracted to them.
The tech expert told Daily Mail that computer vision systems built into glasses will be able to pick up on details and emotional cues invisible to the naked eye.
Liddell told the publication, “Humans engage in many opportunities and advantage-seeking behaviours and they'll put these backchannel superpowers to use across all sorts of domains, from complex political negotiations to ordinary first dates.
“Early use cases will feature scenarios in which only one participant has backchannel superpowers, creating grossly uneven playing fields, so eventually, everyone will have them on some level.”
Dubai is improving the city’s transport network and making its roads safer by using AI-powered laser technology that can detect road defects.
Dubai Roads and Transport Authority (RTA) patrol cars scan the emirate's road network to detect cracks and potholes as small as 1 mm, reducing potential risks and inconvenience to travellers.
The tech can identify up to 13 types of defects on the roads.
The director of roads and facilities maintenance at the RTA, Hamad Al Shehi, told The National, “The technology utilises advanced sensors and artificial intelligence algorithms to identify and analyse road flaws.”
Thanks to its high-resolution cameras and laser scanning features, the system instantly detects cracks as it scans the road surface.
We’re only able to love or criticise a remix after the music industry releases it. And often, one of the most censured tunes occupies the top spot on the hit charts. But that’s about to change drastically.
Until now, most agencies or review teams forecasting whether a song would be a hit or a flop could only do so with around 50% accuracy.
But recently, US-based researchers coupled advanced machine learning techniques with listeners’ neural and cardiac responses to identify potential hit songs with 97% accuracy, a vital edge when an average of 10,000 new tunes are released every day.
The approach here is called neuroforecasting, which uses brain activity data to anticipate widespread trends.
Paul Zak, a professor at Claremont Graduate University and senior author of the study published in Frontiers in Artificial Intelligence, said, “By applying machine learning to neurophysiologic data, we could almost perfectly identify hit songs.”
He added, “That the neural activity of 33 people can predict if millions of others listened to new songs is quite amazing. Nothing close to this accuracy has ever been shown before.”
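To make the idea concrete, here’s a toy sketch of what neuroforecasting boils down to computationally: a classifier trained on listeners’ physiological features to predict hit versus flop. The data below is random stand-in data, not the study’s, and the model choice is mine:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Stand-in data: one row per song, columns are aggregated listener
# responses (e.g. heart-rate-derived immersion scores).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))      # 120 songs x 8 neurophysiologic features
y = rng.integers(0, 2, size=120)   # 1 = hit, 0 = flop (market outcome)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # ~0.5 here, since the data is random
```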
A lot of mind-blowing AI tools have been launched for the music industry. Depending on how, and by whom, they’re used first, a lot of business can be won and a lot can be lost.
Instorier recently released a new 3D tool that turns any 2D image into a high-definition 3D model. Kaedim, Nvidia and Dora AI offer some of the other expert-level tools that can do the same very effectively.
These easy-to-learn and use tools can change the world of storytelling in myriad ways—your use case can be as basic as a college presentation or as masterly as an advertisement.
Another mind-blowing release is DragGAN, an AI tool that lets you take any photo and manipulate it however you want. You drag the part of the image you want to change, and the tool precisely edits the pose, shape, expression and layout of any object.
Change any image you want into whatever you want. Of course, we’re sure no one’s going to misuse the tool to generate inappropriate content because anything like that never happens on the internet, right?
DragGAN, short for “Drag Your GAN”, was created by researchers from MIT, UPenn, Google and other institutions.
Meet Belle, an autonomous fish robot designed to roam the seas and collect valuable data about marine life without disturbing its environment.
The purpose of the bot fish is to help study underwater organisms and the factors that affect them, such as overfishing and climate change, according to a report by Euronews and Reuters.
Leon Guggenheim, a mechanical engineering student at ETH Zurich, the Swiss Federal Institute of Technology, told Reuters, “We want to capture the ecosystems the way they actually behave.”
Somatic, a commercial cleaning robotics company, has launched an AI-powered public restroom-cleaning machine. The bot promises higher-quality cleaning, lower staff turnover and 50% cost savings.
The company currently charges $1k (Rs 82,000 approx) a month for an 8-hour-a-day/40-hour-a-week shift.
Nvidia, better known for its chips and its end-to-end platform for developing and deploying software-defined autonomous vehicles, has made it much easier for camera-shy people to look into the camera without re-clicking or reshooting.
The latest version of Nvidia's video conferencing software, Nvidia Broadcast version 1.4 features two new tools, one of which includes automatic gaze adjustment.
Eye Contact is an AI effect that estimates a person’s gaze and aligns it with the camera, while retaining their natural eye colour and blinks.
Researchers at Carnegie Mellon University developed a method using WiFi routers for detecting the three-dimensional shape and movements of human bodies in a room.
DensePose, a system for mapping all of the pixels on the surface of a human body in a photo, was also used to detect the figures. According to a recent paper published on arXiv, they developed a deep neural network that maps WiFi signals’ phase and amplitude sent and received by routers to coordinates on human bodies.
This allowed researchers to “see” people without using cameras. The Carnegie Mellon researchers wrote that WiFi signals “can serve as a ubiquitous substitute” for normal RGB cameras when one wishes to sense people in a room they’re not in. WiFi also gets past obstacles like poor lighting and occlusion that trip up regular camera lenses.
“In addition, they protect individuals’ privacy and the required equipment can be bought at a reasonable price,” they added.
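For the technically inclined, here’s an illustrative PyTorch sketch of the general recipe: a network that maps WiFi channel readings (amplitude and phase per antenna and subcarrier) to per-keypoint heatmaps. This is a simplified stand-in, not the CMU architecture, and the antenna and subcarrier counts are assumptions:

```python
import torch
import torch.nn as nn

class WiFiPoseNet(nn.Module):
    """Toy WiFi-to-pose model: channel readings in, keypoint heatmaps out."""
    def __init__(self, antennas=3, subcarriers=30, keypoints=17):
        super().__init__()
        in_ch = 2 * antennas  # one amplitude + one phase channel per antenna
        self.encoder = nn.Sequential(
            nn.Conv1d(in_ch, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Linear(128 * subcarriers, keypoints * 32 * 32)
        self.keypoints = keypoints

    def forward(self, csi):              # csi: (batch, 2*antennas, subcarriers)
        feats = self.encoder(csi).flatten(1)
        return self.decoder(feats).view(-1, self.keypoints, 32, 32)

net = WiFiPoseNet()
heatmaps = net(torch.randn(1, 6, 30))    # one fake channel reading
print(heatmaps.shape)                    # torch.Size([1, 17, 32, 32])
```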
Microchip implants were here in 2022, so this may not surprise you as much as the other AI developments.
The chip lets you pay with your hand whether or not you’re carrying your wallet, cards or phone. You simply place your hand near a contactless card reader and the payment is done.
In 2021, British-Polish firm Walletmor became the first company to offer implantable payment chips for sale.
Walletmor's chip weighs less than a gram and is only a little bigger than a grain of rice. It consists of a tiny microchip and an antenna encased in a biopolymer, a naturally sourced material similar to plastic.
The makers claim that it is entirely safe, works immediately after being implanted and stays firmly in place. It does not require a battery or other power source.
The trend of adding chips beneath the skin is now picking up among some early adopters in Japan.
The same technology is heading towards your house keys, car keys and many other areas. So, be ready!
Mind you, these are walking shoes. Do not confuse them with skates. Moonwalkers, created by Shift, might be the future of walking. The AI-powered shoes let you walk at the speed of a run, 2.5x faster than normal, without exerting any extra effort on your part.
According to the makers, Moonwalkers start in Lock Mode, in which an electronic brake fully locks the wheels. As per the instructions shared on Kickstarter, “To enter shift mode that lets you walk at the speed of a run, you need to lift your right heel in the air and rotate it clockwise towards your left leg while keeping your toe on the ground.
“To go back into lock mode, lift your right heel in the air and then back down to the earth as usual. Now you are ready for stairs, buses, trains, or anywhere else where you do not want to walk at the speed of a run.”
Meet Paragraphica, a lensless AI camera that uses location data and artificial intelligence to generate imagery.
According to Bjorn Karmann, its inventor, “The camera displays a description of your current location, utilising the address, weather, time of the day and nearby places. When pressing the trigger, the camera will create a photographic representation of it using a text-to-image AI.”
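The mechanism is simple enough to sketch: compose a prompt from location metadata, then hand it to a text-to-image model. Here’s a hedged illustration using Stable Diffusion via the diffusers library; the prompt template and field names are my guesses, not Paragraphica’s actual code:

```python
# pip install diffusers transformers accelerate
from datetime import datetime
import torch
from diffusers import StableDiffusionPipeline

def location_prompt(address: str, weather: str, nearby: list[str]) -> str:
    """Compose a scene description the way Paragraphica's viewfinder might."""
    now = datetime.now().strftime("%A %H:%M")
    return (f"A photo taken at {address} on {now}, {weather} weather, "
            f"near {', '.join(nearby)}, realistic, documentary style")

prompt = location_prompt("Connaught Place, New Delhi", "hazy",
                         ["a metro station", "a coffee shop"])

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
pipe(prompt).images[0].save("paragraphica_shot.png")
```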
If you’re an anime fan, you’re going to love this one. The reDream AI tool lets you convert any object into anime in real time. Once you get the hang of it, you can create anime straight from your phone, with no special skills or complicated tutorials required.
It is not news that it takes years and years of studying and hard work to decode a language that’s dead.
It took 23 years to crack the Egyptian hieroglyphics on the Rosetta Stone, nearly two centuries to understand Mayan glyphs and over 3,000 years to reveal the earliest form of Greek, Linear B.
But in May, an interdisciplinary group of computer science and history researchers published an article describing how they had created an AI model that can translate the ancient glyphs from dead languages like Akkadian Cuneiform in seconds.
The team, led by a Google software engineer and an Assyriologist from Ariel University, trained the model on existing translations using the same technology that powers Google Translate.
The team wrote, “Hundreds of thousands of clay tablets inscribed in the cuneiform script document the political, social, economic, and scientific history of ancient Mesopotamia. Yet, most of these documents remain untranslated and inaccessible due to their sheer number and limited quantity of experts able to read them.”
The team also shared that cuneiform AI’s translations still had mistakes. But despite occasional errors, the tool saved huge amounts of time and human labour in its initial processing of the texts.
zPod is India’s first self-driving vehicle unveiled by Minus Zero, a Bengaluru-based AI startup.
“With true vision autonomy coming to the fore, one can make autonomous vehicles a reality, solving major pain points of the mobility paradigm,” stated CEO Gagandeep Reehal at the launch.
The autonomous four-wheeler has no steering wheel; its AI system relies on strategically placed high-resolution cameras that help the vehicle analyse driving conditions, including traffic or an animal passing by.
In an interview with Business Today, the founders shared that they do not intend to make cars. They said, “We are not an OEM; we do not plan to build cars. It is a vehicle developed to showcase the system we have built. We believe not one company or country alone can build and develop the concept of autonomous vehicles; it needs the whole ecosystem to come together. It is just the beginning.”
“Our concepts allow automakers to explore new design possibilities for vehicles, currently limited by the constraints of a driver-led design,” added Kalra.
Calligrapher.ai is a web-based AI tool that lets users generate realistic computer-generated handwriting. The platform uses a recurrent neural network to convert text into handwriting in a range of print and cursive styles. In the future, we can expect AI models or bots that mimic a person’s handwriting exactly and help students with their homework.
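Under the hood, tools like this usually follow Alex Graves’ classic handwriting-synthesis recipe: a recurrent network emits pen-stroke offsets one point at a time. Here’s a minimal, untrained PyTorch sketch of that sampling loop, just to show the shape of the approach; the real model also conditions on the input text and uses a mixture density output:

```python
import torch
import torch.nn as nn

class HandwritingRNN(nn.Module):
    """Toy stroke model: an LSTM predicts the next pen offset (dx, dy)
    and a pen-lift probability from the previous point."""
    def __init__(self, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(3, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)  # dx, dy, pen-lift logit

    def forward(self, stroke, state=None):
        out, state = self.lstm(stroke, state)
        return self.head(out), state

@torch.no_grad()
def sample(model, steps=200):
    point, state, path = torch.zeros(1, 1, 3), None, []
    for _ in range(steps):
        out, state = model(point, state)
        dx, dy, pen_logit = out[0, -1]
        pen = torch.bernoulli(torch.sigmoid(pen_logit))
        point = torch.stack([dx, dy, pen]).reshape(1, 1, 3)
        path.append((dx.item(), dy.item(), pen.item()))
    return path  # a scribble until trained on real stroke data

print(len(sample(HandwritingRNN())))  # 200 sampled pen offsets
```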
There are plenty of AI logo generators out there that are hankering for your attention, but I found eluna.ai to be quite simple and productive.
All you have to do is sign in, go to Reimagine, choose a model, upload the pre-existing or pre-designed logo and enter a prompt. It could be something as crazy as “the aerial shot of a forest”. Click on generate and you’ll have a design you’ll be happy with. You can be specific and change your prompt based on what you liked and what you didn’t in the previous result.
So, so many AI music generators are available that it has become simple to create tunes without any specific expertise.
Google recently announced MusicLM, an AI platform that can generate high-fidelity music through text prompts such as “a calming sitar melody backed by rhythmic drum and whistle; this song should create a soothing and adventurous atmosphere while being danceable”.
The product, which has not yet been released due to copyright issues, is built on a neural network and trained on a massive data set of over 2,80,000 hours of music.
Other music-generating techs like AIVA, Amper Music and Alysia can also read and interpret audio samples, even when these are in the form of a whistle or humming.
As you continue to instruct such AI software to alter the pitch, volume, tone, instrument and accent to your liking, the model learns from your choices and tunes its future output to your taste.
Israeli startup Deepdub partnered with MiLa Media, a New York-based studio, to localise Every Time I Die, a 2019 Netflix feature film.
Deepdub used its AI speech synthesis technology to dub the movie into other languages, including Latin American Spanish and Brazilian Portuguese.
Gil Perry, co-founder and CEO of D-ID, told Forbes, “In seven to ten years, there will be a huge disruption in the media and entertainment market, in how media is produced…We decided to lead this disruption, to be the first to make a full Hollywood production with AI. Just like Pixar and animation.”
In 2019, The Wall Street Journal examined China's increased use of advanced AI technology in classrooms and brought it forth in a video report.
Schools are increasingly leveraging brain-wave trackers in China to gather information on student health and engagement.
While many teachers and parents see the technology as a tool to improve grades, it has raised serious privacy concerns.
According to a 2019 Asia Times report, Chinese scientists have already created an AI-based 500-megapixel cloud camera system “to capture thousands of faces at a stadium in perfect detail.”
This one is straight out of a Black Mirror episode. Scientists have been working on bio-hybrids (a mix of animal and machine) for a while now, and several species, including pigeons, fish and moths, have already been chipped.
Now, the US and Australian militaries are learning to control robot dogs.
According to Gavin Kenneally, chief product officer at Ghost Robotics, a Philadelphia-based company that builds robots for commercial and military partners, the 100lb, headless, four-legged robot is designed to trek across all types of natural terrain, including sand, rocks and hills, as well as human-built environments like stairs.
Kenneally also shared, “It has the ability to feel through its motors and can estimate friction forces and automatically correct for uneven or slippery ground.”
The US Air Force is testing AI agents, built on neural networks, that can fly fighter jets without a pilot. It has successfully completed 12 AI-led test flights using the VISTA X-62A, a modified F-16 fighter jet.
Such tests are a part of the Skyborg program that intends to develop unmanned combat aerial vehicles. The goal is not to replace human pilots but to augment them with AI.
ElevenLabs, a tech startup that offers voice cloning, is becoming alarmingly good and alarmingly popular. The company, founded by two Polish entrepreneurs, recently released its multi-lingual support.
Using a deep-learning speech synthesis model, it can turn text into speech in anybody’s voice and with any emotion, and it is currently available in English, French, German, Spanish, Hindi and Polish.
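To see how low the barrier has become, here’s a hedged sketch against ElevenLabs’ public REST endpoint. The API key and voice ID are placeholders, and the model name is an assumption based on the multilingual release:

```python
import requests

API_KEY = "YOUR_ELEVENLABS_KEY"      # placeholder credential
VOICE_ID = "YOUR_CLONED_VOICE_ID"    # ID of a voice you've cloned

resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY},
    json={
        # Hindi for "Hello, this is my cloned voice."
        "text": "नमस्ते, यह मेरी क्लोन की गई आवाज़ है।",
        "model_id": "eleven_multilingual_v2",  # assumed multilingual model name
    },
)
with open("cloned_speech.mp3", "wb") as f:
    f.write(resp.content)  # the synthesised audio in the cloned voice
```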
For reference, let’s just say don’t be confused if you see a video of Leonardo DiCaprio giving a speech in Kim Kardashian’s voice and diction.
OTV, an Odisha-based news channel, introduced India to its AI anchor Lisa, who can speak multiple languages, a while back. She has been presenting news in Odia and English for OTV and its digital platforms.
If you haven’t already met Caryn AI, here’s a brief introduction.
Caryn, the AI clone of 23-year-old Snapchat influencer Caryn Marjorie, charges $0.60 to $1 per minute to talk to you, depending on the plan chosen. And yes, she was designed to become your perfect AI girlfriend. The bot made $72K (nearly Rs 60 lakh) in a single week.
The chatbot was trained on Marjorie’s voice and responds to messages from users on the app Telegram. The bot uses OpenAI’s GPT-4 for responses.
At Fuyang West Railway Station in China, an AI-powered sprinkler system was recently tested. It detects heat sources once the surrounding air reaches 165 degrees Fahrenheit, and the detection triggers the release of water through a network of sprinkler heads connected to a distribution piping system.
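Strip away the AI branding and the core control loop is simple. Here’s a minimal Python sketch with the sensor and valve calls stubbed out as hypothetical hardware I/O:

```python
import time

TRIGGER_F = 165.0  # the threshold reported for the Fuyang West system

def read_air_temp_f() -> float:
    """Stub: replace with a real thermal-sensor read."""
    return 72.0

def open_sprinkler_valves() -> None:
    """Stub: energise the valves feeding the distribution piping."""
    print("Releasing water through the sprinkler network")

while True:
    if read_air_temp_f() >= TRIGGER_F:  # heat source detected nearby
        open_sprinkler_valves()
    time.sleep(1)                       # poll once per second
```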
Developed by a team of researchers, the robotic third thumb is designed to be worn on the hand and give users extended dexterity.
Yes, AI has the potential to enter a professional kitchen and make the perfect-tasting pizza for you.
The world’s first autonomous pizza robot is named Pazzi, meaning ‘crazy’ in Italian. It carries out every single step of pizza-making on its own and has been turning out 80 pizzas an hour since 2021.
A few years back, beverage company Molson Coors used a “targeted dream incubation” (TDI) method to play advertisements in people’s dreams. It prompted 40 scientists to sign an open letter warning everyone against the commercial use of dream advertising.
More and more marketers want to use targeted dream techniques for advertising by 2025, without realising that TDI advertising is not some fun gimmick but a slippery slope with real consequences.
The scientists, of course, are working on regulating the system and advocating policies that can “keep advertisers from manipulating one of the last refuges of our already beleaguered conscious and unconscious minds: Our dreams.”
SLAIT, a school for ASL learners, started back in 2021 as a real-time AI translator for sign language. It could recognise the most common signs and help an ASL speaker communicate more fluently with someone who doesn't know the language.
But the process halted as the team realised they needed more time, money and data.
“We got great results in the beginning, but after several attempts we realised that, right now, there just is not enough data to provide full language translation,” Evgeny Fomin, CEO and co-founder of SLAIT told TechCrunch.
Recently, Priyanjali Gupta, a third-year engineering student at Vellore Institute of Technology, developed an AI model that can do the same in real time.
According to Priyanjali’s GitHub post, she developed the AI model using the Tensorflow object detection API, translating hand signs via transfer learning from a pre-trained model named ssd_mobilenet.
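As a rough illustration of that stack, the snippet below loads a stock SSD MobileNet detector from TensorFlow Hub and runs it on a single frame. Gupta’s actual model adds a fine-tuned head for the ASL signs, and the image path here is a placeholder:

```python
# pip install tensorflow tensorflow-hub opencv-python
import cv2
import tensorflow as tf
import tensorflow_hub as hub

# Stock detector; transfer learning would retrain its head on hand-sign images.
detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

frame = cv2.imread("hand_sign.jpg")            # placeholder test image
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
batch = tf.expand_dims(tf.convert_to_tensor(rgb, dtype=tf.uint8), 0)

outputs = detector(batch)
scores = outputs["detection_scores"][0]
boxes = outputs["detection_boxes"][0]          # normalised [ymin, xmin, ymax, xmax]
print(tf.boolean_mask(boxes, scores > 0.5))    # only the confident detections
```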
During his segment on 60 Minutes, Arnav Kapur, an MIT graduate, introduced his invention called AlterEgo. The revolutionary device allows users to interact with the internet using only their internal voice commands.
For reference, you can order food without saying a word if you get your hands on AlterEgo.
AlterEgo captures the subtle neuromuscular signals produced when a user silently vocalises words and transmits information back to the user’s inner ear through bone-conduction vibrations. In his interview, Kapur demonstrated how effortlessly he could order a pizza using the device.
The first AI tool trained on the dark web is here. A team of researchers trained DarkBERT on data from the dark web, the part of the internet intentionally hidden from regular search engines, to find new ways to combat cybercrime. The AI has been taught to analyse and recognise different types of content found on the dark web, including forums, marketplaces and discussions.
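DarkBERT itself is access-restricted, but the classification step it performs looks roughly like the sketch below, which substitutes the publicly available RoBERTa (the architecture DarkBERT builds on) with an untrained classification head and made-up labels:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# RoBERTa stands in for the restricted DarkBERT checkpoint.
tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=3)  # hypothetical labels: forum / marketplace / other

text = "an example snippet scraped from a hidden-service forum"  # made-up input
inputs = tok(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)  # near-uniform until the head is fine-tuned on labelled pages
```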
LaserWeeder, launched by Carbon Robotics, combines AI and laser technology to help farmers produce healthy crops by eliminating weeds with a chemical-free approach.
According to Carbon Robotics, “High-resolution cameras and an onboard supercomputer identify crops and weeds in real-time. The system then uses powerful lasers to eliminate weeds without damaging crops, regardless of the time of day or weather conditions. This improves crop yield, reduces farming costs, and supports sustainable, organic farming practices.”
Many More To Come…