
January 27, 2025 Joseph Sassoon

Big Tech is everywhere – innovating, entertaining, and, let’s face it, raking in the cash. But can it be trusted to ever balance power with responsibility?

Big Tech has never been so popular. People love it, loathe it, and can’t seem to live without it. From streamlining our lives to shaping our digital playgrounds, these tech titans have given us so much to celebrate. Yet, with great power comes great responsibility – or so the saying goes. Accountability, anyone?

Indeed, Big Tech is the epitome of cool. It’s the VIP of the digital age, its leaders given front-row seats at presidential inaugurations like rock stars at an awards show. Admired? Often. Respected? Occasionally. Criticized? Oh, endlessly. While their innovations dazzle, their bank accounts tend to provoke more eye-rolls than applause. To many, their wealth isn’t just excessive – it’s a slap in the face. In fact, Big Tech has an image problem bigger than its market cap. Trust, or the lack of it, might just be its Achilles’ heel.

Tech That Sparkles

First, let’s give credit where it’s due. Big Tech companies have revolutionized nearly every aspect of our lives. They’ve connected us across continents, brought education to our fingertips, and transformed mundane tasks like shopping into an oddly satisfying one-click experience. (Who knew buying paper clips at 3 a.m. could feel so empowering?)

Think of the social good, too: AI systems diagnosing diseases early, cloud computing empowering small businesses, and digital platforms giving a voice to the voiceless. Let’s not forget the sheer entertainment value. Streaming platforms, gaming networks, virtual realities… Big Tech knows how to keep us hooked – and happy.

The Gray Areas: No Free Lunch

But let’s not pretend all this brilliance is purely altruistic. Big Tech’s currency isn’t just innovation; it’s us. Every click, every like, every late-night search feeds the machine. And that’s fine – up to a point. The question is, who’s holding the reins?

Yes, these companies provide incredible services, often for free. But we’ve all learned by now that ‘free’ always comes with a catch: your data. And while targeted ads for cat sunglasses can be amusing, the deeper implications – privacy concerns, misinformation, and the occasional ethical hiccup – are definitely worth considering.

Accountability: The Next Big Thing

Big Tech knows it needs to step up. The buzzwords are all there: transparency, responsibility, ethical AI. They’ve even launched initiatives to improve digital literacy, combat cyberattacks, and protect our privacy (sort of). And let’s be fair, many of these efforts are making a real difference.

The challenge? Accountability isn’t as flashy as launching a new gadget or platform. Regulation debates are thorny, and while Big Tech says it’s open to change, you can’t blame us for being a little skeptical. After all, it’s hard to write your own rulebook without a little bias.

How to Get There

Big Tech isn’t the villain in this story, nor is it the hero. It’s a complicated character, just like the rest of us. The services they provide are transformative, but the risks of unchecked power are equally profound. So what’s the answer? Balance.

Imagine a world where innovation flourishes but doesn’t trample over ethics. Where data is used responsibly, and transparency isn’t just a PR buzzword. It’s not an impossible dream, but it does require effort – from governments, companies, and yes, us, the users. Let’s strive to build a digital future that benefits everyone, not just a select few.

January 23, 2025 Joseph Sassoon

Conversational AI is making waves, engaging in increasingly nuanced and dynamic dialogues. We’ll soon live in a world populated by countless synthetic voices, like a wood alive with fireflies under the night sky.

Let’s acknowledge the strides these systems have made. Modern chatbots are far more than augmented help desks. They can banter, convey irony, and even engage in philosophical musings. Ask them to generate a haiku or weigh in on perennial debates like pineapple on pizza (yes or no), and they’ll deliver – sometimes with impressive flair.

The core challenge to their advancement lies in the subtle, almost ineffable qualities of human communication – empathy, intuition, and the ability to read between the lines. While bots excel at synthesizing data and mimicking conversational styles, they don’t yet grasp the emotional weight or cultural implications of what they say. (Although, to be fair, neither do some humans on social media.)

That said, progress is relentless. Developers are refining the systems to include better context awareness, emotion-sensing optimization, and more sophisticated conversational timing. With every iteration, these bots inch closer to seamless interaction. It’s conceivable that, in the near future, they could become indispensable for tasks requiring advanced dialogue skills, from mental health support to creative collaboration.

Will they ever truly match humans? Opinions are divided. Some foresee a future where bots are indistinguishable from people, while others argue that human communication – anchored in lived experience – will always be beyond AI’s grasp. Regardless, conversational AI doesn’t need to replace us to be valuable. As brainstorming partners, knowledge assistants, and even budding poets, chatbots already enhance the way we think, work, and communicate more than we ever imagined.

Who knew algorithms would be the ones to remind us that words still matter?

January 19, 2025 Joseph Sassoon

Ah, progress. In a distant past, the greatest achievement in communication was the telegram, buzzing its dots and dashes across the globe. Today, machines don’t just send messages – they have entire conversations, and we’re not invited. Welcome to the quirky, unsettling, and dazzling world of machine-to-machine (M2M) communication.

M2M is the connective tissue of the Internet of Things (IoT). Imagine fridges talking to grocery stores about milk levels, or your car telling the traffic light it’s in a hurry. It’s a little like a gossip network, but instead of rumors, it’s all about data – lots and lots of data. From industrial robots to smart homes, M2M is the secret sauce that’s making everything smarter, faster, and occasionally creepier.
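The fridge-to-grocery-store chatter above is, at heart, a publish/subscribe exchange. Here is a minimal, self-contained Python sketch of the pattern; the topic names, the `Bus` class, and the 20% reorder threshold are all invented for illustration, and real M2M deployments would typically ride on a protocol such as MQTT over a network broker:

```python
import json

class Bus:
    """A minimal in-memory message bus standing in for an IoT broker."""
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, handler):
        self.subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        message = json.dumps(payload)  # machines exchange structured data
        for handler in self.subscribers.get(topic, []):
            handler(json.loads(message))

orders = []
bus = Bus()

# The "grocery store" listens for low-stock reports from appliances
# and reorders anything below a 20% level.
def on_stock_report(msg):
    if msg["level"] < 0.2:
        orders.append(msg["item"])

bus.subscribe("fridge/stock", on_stock_report)

# The "fridge" reports its levels; no human in the loop.
bus.publish("fridge/stock", {"item": "milk", "level": 0.1})
bus.publish("fridge/stock", {"item": "eggs", "level": 0.8})

print(orders)  # only the low item triggers a reorder
```

Swapping the in-memory bus for a real MQTT client changes nothing conceptually: devices publish structured payloads to topics, and other machines act on them without human mediation.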

The Good Stuff: Convenience on Steroids

First, the good news: M2M is here to make life easier. In factories, machines share real-time updates, keeping production lines running smoother than a figure skater’s glide. In healthcare, wearable devices quietly monitor vital signs and whisper updates to doctors without anyone lifting a finger. Even agriculture is getting in on the act, with soil sensors chatting up irrigation systems about whether the crops are thirsty.

It’s automation at its finest. Need your house to preheat the oven because it knows you’re on your way home? Don’t worry, M2M is there for you. Want your car to warn you about an accident up ahead? M2M’s already on it. It’s like having an army of tireless personal assistants who never demand a coffee break.

But Wait, There’s a Catch

Of course, every shiny new technology has a shadow side, and M2M is no exception. Let’s start with privacy – or rather, the lack thereof. These machines may be chatting amongst themselves, but they’re also hoarding data about you. Your habits, preferences, and maybe even your poorly hidden addiction to late-night snacks are all fair game. Who’s listening? Corporations, hackers, governments… you name it.

And then there’s security. M2M systems are only as strong as their weakest link, and with billions of devices in the mix, there are plenty of weak links. Hackers love nothing more than turning your smart toaster into a weaponized bot soldier. Yes, it’s as ridiculous as it sounds – until it’s not.

Ethical Whirlwinds and Job Jitters

Let’s not forget the ethical dilemmas. Machines making decisions about humans? It’s happening. Autonomous vehicles deciding who gets priority in an unavoidable crash scenario. Smart surveillance systems flagging “suspicious” behavior. These are no longer the stuff of science fiction.

And what about jobs? M2M is automating tasks at a breakneck pace, leaving many wondering if their careers are next on the chopping block. Sure, someone needs to program and maintain these systems, but that’s cold comfort if you’re displaced by a machine that works faster and cheaper than you. 

So, What Now?

M2M isn’t inherently good or evil – it’s just a tool. A powerful one, yes, but whether it builds or destroys depends on how we use it. Transparency, robust security, and ethical guidelines are essential if we’re to harness its potential without succumbing to its pitfalls. In the meantime, approach M2M with a mix of awe and skepticism. Celebrate the smart conveniences, but keep an eye on the risks. And possibly, don’t let your fridge know too much about you – it might tell the dishwasher.

January 16, 2025 Joseph Sassoon

Elon Musk is running an archetypal marathon, embodying more personas than your average mythological pantheon.

First, there’s the Creator – Tesla’s electric revolution proves his Promethean spark. Then the Explorer, shooting for the stars (literally) with SpaceX, his Mars colony dream burning brightly. Enter the Magician: Neuralink’s neural interface taps into our collective sci-fi fantasies, reshaping the mind itself. And, of course, the Ruler, commanding wealth, influence, and a front-row seat in the White House’s new era.

But don’t get too comfortable. Could Musk also be slipping into the Outlaw archetype? The disruptor. The industry shaker. The status-quo annihilator. Fans might call it progress; critics call it chaos. The real question? Whether this polyphonic blend of archetypes leads us to a utopia – or an interstellar dumpster fire. For now, all we can do is sit back and watch the man play every note.

January 13, 2025 Joseph Sassoon

With an announcement that felt straight out of a sci-fi epic, at CES 2025 (the world’s most important tech event) Jensen Huang, president and CEO of Nvidia, unveiled Cosmos, a family of “world foundation models” poised to reshape robotics and autonomous systems. These neural networks don’t just calculate or generate; they predict and create physics-aware virtual environments and tools. Yes, the machines are learning not just to think but to move – because why stop at taking over the Internet when you can conquer the physical world?

“The ChatGPT moment for robotics is coming,” Huang declared, setting the stage for what might be the next great leap in AI. Like language models before them, world foundation models (WFMs) like Cosmos promise to be transformative, enabling next-gen robots and autonomous vehicles that won’t just stumble through your living room but will navigate it with uncanny precision.

To ensure this revolution isn’t reserved for the privileged few, Nvidia is open-sourcing Cosmos. It’s a bold move, putting these tools in the hands of developers everywhere. “We created Cosmos to put general robotics in reach of every developer,” Huang explained, imagining a world where robots are not only smarter but more widely accessible.

At its core, Cosmos is about realism. These WFMs combine data, text, images, video, and motion to create virtual environments so accurate you might start mistaking the simulation for reality. But this isn’t just about creating pretty virtual worlds – it’s about teaching machines how to understand and interact with the real one. From physical interactions to environmental navigation, these models represent a foundational shift in what AI can do.

This perspective is undeniably ambitious and speaks to a broader shift in how AI could impact the physical landscape. If large language models revolutionized the way we process and generate information, world foundation models aim to do the same for robotics and autonomous systems. But are robots truly poised to make this substantive leap into real-world applications? There are promising signs that they are:

  1. Improved simulation capabilities. The ability to simulate complex physical environments with high accuracy is a game-changer. Platforms like Cosmos signal that we are closing the gap between training in a virtual space and performing in the real world.

  2. Advances in multimodal learning. Huang’s emphasis on combining data from text, images, video, and movement is aligned with the AI trend of multimodal models. By integrating diverse types of input, WFMs can develop a nuanced understanding of the world, making them better suited to handle dynamic environments.

  3. Open-source democratization. Nvidia’s decision to open-source Cosmos is a sign that physical AI is moving from niche research labs to broader developer communities. This democratization could accelerate innovation, with startups, researchers, and even hobbyists contributing to the evolution of robotics.

  4. Emerging applications. Autonomous vehicles, warehouse robots, and drones are already functioning in semi-controlled real-world environments. The tools provided by Cosmos could help extend these capabilities to less structured spaces, such as homes, cities, or disaster zones.

  5. Economic and industry pressure. Robotics development is no longer a theoretical exercise. Industries like logistics, healthcare, and agriculture are actively seeking AI-driven solutions to labor shortages, efficiency bottlenecks, and environmental challenges. This demand is driving funding, research, and practical deployment.

That said, big jumps into the real world are rarely smooth. Robots must contend with unpredictable human behavior, complex environments, and the need for safety and reliability. Transfer learning (moving knowledge from a simulated environment to the real world) remains a technical hurdle. Ethical and regulatory frameworks are also playing catch-up with the pace of technological progress.
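One common way to attack the transfer-learning hurdle just mentioned is domain randomization: train across many randomly perturbed copies of the simulator so that the real world looks like just one more variation. Here is a toy, self-contained sketch of the idea; the one-line “physics,” the friction range, and the target distance are all invented for illustration and have nothing to do with Nvidia’s actual tooling:

```python
import random

def simulate_push(force, friction):
    """Distance a block slides for a given push, in arbitrary units."""
    return max(0.0, force - friction)

def train_policy(episodes=1000, seed=0):
    """Estimate the force needed to slide the block exactly 1.0 unit,
    averaged over many randomized friction values."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(episodes):
        friction = rng.uniform(0.2, 0.6)   # randomized sim parameter
        total += 1.0 + friction            # force achieving distance 1.0 here
    return total / episodes

force = train_policy()
# In the "real world" friction happens to be 0.4; a policy trained across
# the randomized range should land close to the target anyway.
real_distance = simulate_push(force, friction=0.4)
print(round(real_distance, 2))
```

The point is not the toy physics but the training loop: because the policy never sees a single fixed simulator, it cannot overfit to one, which is exactly the property sim-to-real transfer needs.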

Still, Huang’s vision of WFMs as the “missing link” in robotics isn’t just marketing – it’s a reflection of a tangible trend toward AI systems that are not only intelligent but also physically capable. While Cosmos might not single-handedly bring about the “ChatGPT moment” for robotics, it represents a meaningful step toward that goal. The leap into the real world will depend on whether these advances can translate into scalable, reliable, and widely adoptable systems. What’s clear, though, is that we’re no longer asking if this leap will happen, but when.

January 6, 2025 Joseph Sassoon

Hold onto your notebooks – Gen AI is shaking up the storytelling scene, but let’s not hand over the Pulitzer just yet. While it can whip up passable narratives faster than you can say “once upon a time,” the leap from formulaic plotlines to soul-stirring tales remains elusive.

Sure, it can mash together tropes, predict the next plot twist, and mimic your favorite authors with uncanny precision. But ask it to craft a fresh, boundary-pushing masterpiece? That’s where things get… complicated. Gen AI excels at remixing, not inventing from scratch. It can generate a “new” fairy tale, but chances are it borrows heavily from something already in the public domain.

Why the gap? Ask the tools directly; they are aware of the problem. Emotion. Subtlety. That ineffable spark of lived experience. Gen AI reads patterns, but it doesn’t “feel” them the way humans do. Even when it stumbles into brilliance, it’s more happy accident than intentional artistry.

But don’t count it out. The storytelling bots are learning – fast. Future models could integrate sensory data, emotional mapping, and feedback loops that sharpen their narrative instincts. Some predict AI capable of literary-grade fiction within the decade. Others argue that the real breakthrough won’t come from AI replacing authors, but collaborating with them – a digital muse with infinite patience.

Until then, Gen AI remains a plot assistant, not the auteur. It’s a co-pilot for brainstorming sessions, a generator of interesting (if occasionally bizarre) first drafts. The future of storytelling? For the foreseeable future, it’s still being written – by humans.

April 8, 2020 Joseph Sassoon

The dynamics of the coronavirus crisis are proving that Bill Gates had it right. In a now-famous TED talk dated April 3, 2015, he predicted that “if anything kills over 10 million people in the next few decades it’s most likely to be a highly infectious virus rather than a war”. And he was right on another crucial point – when he argued that “we’re not ready” to face such a catastrophe.

Our unpreparedness, which has become evident in the questionable way the crisis is being handled in many countries, has led to social distancing and personal isolation. This, in turn, has brought about an unprecedented reliance on digital tools that are helping us communicate and support the feeling of togetherness.

The push to digitalize is indeed one of the few benefits of this crisis, but it should not be overemphasized. It may be true that even grandma is learning to use Zoom, and that you can definitely hold that meeting online instead of going to the office. Surely, thanks to COVID-19 many are discovering how useful digital tools can be in their lives and work. The proof is the sense of deprivation among the families and individuals who do not have computers at home (yes, there are more of them than you may think).

However, we should not celebrate this development as an all-encompassing solution for several reasons.

The first one is that, as humans, we are not made to live physically isolated for long. As Angela Dewan wrote on CNN, humans are terrible at social distancing. Probably because touch is the first sense a baby develops in the womb, we really like to be with other people and exchange handshakes, hugs, and kisses. Touching each other releases the chemicals in the brain and body (endorphins and the like) that make us happy. This experience is what we miss in teleconferencing, and we’ll do whatever it takes to get it back.

In addition to that, the war against coronavirus must clearly be won in the physical world. When indicating viruses as the biggest threat to mankind, Bill Gates also suggested that the answer to this challenge has to be based on better international coordination among health systems, with the deployment of a rapid healthcare force, and possibly with some support from the military and its logistics capabilities – something that requires a lot of very concrete efforts and investments.

Another reason relates to the fact that the digital world is not exempt from limits and risks. Clearly the advancements of AI, machine learning, and robotics are improving our lives in countless ways and, hopefully, they may soon provide the way to beat this damned virus and other diseases. However, the notion of digitalizing all work doesn’t make sense.

Why? Because we’ll still need the hairdresser. Because being compelled to work digitally for 8 hours a day is alienating. And because technology is progressing so quickly that there’s a huge risk of losing control.

In fact, just as we have been hit by a virus in the real world, we could well be devastated by a very smart malevolent virus or an unforeseen lethal algorithm in the digital one. Elon Musk has been warning us of this danger for years (and Bill Gates too). Computers are now writing their own algorithms, some so complex that the human mind cannot fully comprehend them. In the long term, Artificial Intelligence will become smarter than us; relinquishing our grip on the physical world to transfer most of our activities online therefore doesn’t seem a promising idea.

In brief, this is another area where people, organizations, and governments “are not ready”. Finding the right balance between physical and digital will simply become vital, and we should make our preparations to get there as fast as we can.

February 13, 2020 Joseph Sassoon

An article by James Freeze, published two days ago in Forbes, takes stock of the prospects for digital assistants’ voices. Freeze, Chief Marketing Officer of a US company that builds voice-enabled virtual assistants, sees immense potential in voice technologies’ capacity to transform the way we access information and relate to brands.


January 20, 2020 Joseph Sassoon

The presentation of the Neons at CES 2020, the major technology event held in early January in Las Vegas, caused a sensation. Created by Star Labs, Samsung’s advanced technology research division, the Neons were described as “computationally created virtual beings that look and behave like real humans, with the ability to show emotions and intelligence”.

What exactly are the Neons, and how do they differ from the digital assistants we have grown used to (such as Siri or Alexa, or Bixby, Samsung’s own digital assistant)? The Neons, the company says, “are more like us, independent virtual living beings that can show emotions and learn from experience”.

Designed to hold real-time conversations and behave like humans, the Neons have no body (for now), but on a screen they appear almost indistinguishable from real human beings. They are not know-it-alls like the best digital assistants; instead, they can acquire specialized skills, develop memories, and interact across a wide range of tasks that require a human touch.

Although still experimental, the Neons seem to represent a notable step forward toward conversational chatbots with increasingly sophisticated humanoid features. What can they do, and in what roles could they be employed? According to Pranav Mistry, CEO of Neon and director of Star Labs, they can first of all be “our friends, collaborators, or companions”. They can also act as artificial humans in a host of roles where the emotional component matters: teachers, healthcare workers, receptionists, yoga instructors, financial advisors, spokespeople, and even TV reporters, pop stars, or film actors.

For companies, of course, the Neons could take on a series of roles, all still to be designed, in customer care or as brand ambassadors. This prospect fits perfectly into the development trajectory of conversational intelligences outlined both in my latest book Storytelling e Intelligenza Artificiale (FrancoAngeli 2019) and in the book by my friend and colleague Alberto Maestri, Platform Brand (FrancoAngeli 2019).

From Balmain’s virtual supermodels (who worked on the launch of the fall 2018 season, landing the cover of Vogue) to the Neons is a short step, but it spans a great technological leap. As we noted in our books, machines are rapidly evolving into artificial creatures with an exceptionally human-like appearance, able to interact with us in ever deeper and more complex ways.

In the Neons’ case, that interaction is greatly facilitated by the fact that their response time in conversation is a few milliseconds – that is, real time. And their ability to express emotions, for example to smile at us when appropriate, can only increase their interest and appeal.

The Neons were at the center of the buzz at CES 2020, even though the technology is not yet available (it will go into beta testing in the coming months) and despite some concerns on the privacy front. In a press release, Mistry sought to reassure that the Neons were designed with privacy and trust foremost in mind. He also observed: “We have always dreamed of such virtual beings in science fiction and movies. Neons will integrate with our world and serve as new links to a better future.” Not everyone was convinced, but Star Labs’ project will not stop for that. So, stay tuned.

May 3, 2019 Joseph Sassoon

A few weeks ago DeepVogue, an artificial intelligence system from Shenlan (‘Deep Blue’) Technology, won second prize at the International Competition of Innovative Fashion Design in Shanghai. It is a major international contest, with entries from 15 highly regarded fashion design schools such as ESMOD, Istituto Europeo di Design, Tsinghua University Academy of Arts & Design, and the China Academy of Art.

The win is significant because it is a first, and because it is further proof that AI is gaining ever greater ability to operate in the creative field. True, DeepVogue’s technology, as Shenlan Technology’s representatives acknowledged, requires considerable input from human designers; but the system then uses deep learning (an advanced technology based on neural networks) to study large databases of information and produce original designs. Designs apparently elegant enough to win over the event’s panel of 50 judges, leading them to award DeepVogue the “People’s Choice Award” as well.

As an article in Enterprise Innovation noted, DeepVogue was built to test whether AI now possesses the non-linear thinking and ‘creative talent’ needed to produce designs that can hold their own on the world’s biggest fashion runways. The result is decidedly positive and seems to usher in an era in which the fashion industry will be increasingly driven by the parallel forces of technological innovation and cultural creativity.

That AI succeeds in this field, so far removed from logic, is undoubtedly striking. Fashion, in its endless variations of lengths, shapes, cuts, and colors, has always had something ineffable about it – the source of the sometimes absurd prices of the creations deemed best. But if some highly successful designers can claim the stroke of genius, machines have raw processing power on their side. DeepVogue, for instance, can distinguish 16 million colors, which may well give it an edge in picking next season’s winning shade.

This incursion of artificial intelligence into fashion design comes just as the technology is making extraordinary strides in storytelling (as I recounted in my latest book). It demonstrates ever more convincingly, if proof were still needed, that algorithms, though based on code and mathematical principles, hold excellent cards even when it comes to striking the human fantasy and imagination.