April 27th, 2023
A new take on AI: "Food for Thought" episode 2
-- In conclusion, AI is just another tool
Astronomers have confirmed that Earth-like planets orbit nearby stars, but they’re too far away for humans to reach. That's where artificially intelligent robots come in. On this episode of AI IRL, Bloomberg's Nate Lanxon and Jackie Davalos are joined by astrophysicist Neil deGrasse Tyson, who explains the ways in which AI can shed light on the universe’s great unknowns, help us explore new planets and discover new galaxies. (Source: Bloomberg)
April 26th, 2023, 8:30 PM EDT
April 13th, 2023
Op-ed: Planetary impacts, escalating financial costs, and labor exploitation all factor.
Dr. Sasha Luccioni is a Researcher and Climate Lead at Hugging Face, where she studies the ethical and societal impacts of AI models and datasets. She is also a director of Women in Machine Learning (WiML), a founding member of Climate Change AI (CCAI), and chair of the NeurIPS Code of Ethics committee. The opinions in this piece do not necessarily reflect the views of Ars Technica.
Over the past few months, the field of artificial intelligence has seen rapid growth, with wave after wave of new models like DALL-E and GPT-4 emerging in quick succession. Every week brings the promise of new and exciting models, products, and tools. It’s easy to get swept up in the waves of hype, but these shiny capabilities come at a real cost to society and the planet.
Downsides include the environmental toll of mining rare minerals, the human costs of the labor-intensive process of data annotation, and the escalating financial investment required to train AI models as they incorporate more parameters.
Let’s look at the innovations that have fueled recent generations of these models—and raised their associated costs.
By: Sasha Luccioni - Ars Technica 04/12/2023
Read entire article ➡️ https://rafaelsalazar.io/blogs/news/the-mounting-human-and-environmental-costs-of-generative-ai
March 30th, 2023
A Changing Labor Market ~
The latest breakthroughs in artificial intelligence could lead to the automation of a quarter of the work done in the US and eurozone, according to research by Goldman Sachs.
The investment bank said on Monday that “generative” AI systems such as ChatGPT, which can create content that is indistinguishable from human output, could spark a productivity boom that would eventually raise annual global gross domestic product by 7 percent over a 10-year period.
But if the technology lived up to its promise, it would also bring “significant disruption” to the labor market, exposing the equivalent of 300 million full-time workers across big economies to automation, according to Joseph Briggs and Devesh Kodnani, the paper’s authors. Lawyers and administrative staff would be among those at greatest risk of becoming redundant.
They calculate that roughly two-thirds of jobs in the US and Europe are exposed to some degree of AI automation, based on data on the tasks typically performed in thousands of occupations.
Most people would see less than half of their workload automated and would probably continue in their jobs, with some of their time freed up for more productive activities.
In the US, this should apply to 63 percent of the workforce, they calculated. A further 30 percent working in physical or outdoor jobs would be unaffected, although their work might be susceptible to other forms of automation.
But about 7 percent of US workers are in jobs where at least half of their tasks could be done by generative AI and are vulnerable to replacement.
Goldman said its research pointed to a similar impact in Europe. At a global level, since manual jobs are a bigger share of employment in the developing world, it estimates about a fifth of work could be done by AI—or about 300 million full-time jobs across big economies.
The report will stoke debate over the potential of AI technologies both to revive the rich world’s flagging productivity growth and to create a new class of dispossessed white-collar workers, who risk suffering a similar fate to that of manufacturing workers in the 1980s.
Goldman’s estimates of the impact are more conservative than those of some academic studies, which included the effects of a wider range of related technologies.
A paper published last week by OpenAI, the creator of GPT-4, found that 80 percent of the US workforce could see at least 10 percent of their tasks performed by generative AI, based on analysis by human researchers and the company's own large language model (LLM).
Europol, the law enforcement agency, also warned this week that rapid advances in generative AI could aid online fraudsters and cyber criminals, so that “dark LLMs… may become a key criminal business model of the future.”
Goldman said that if corporate investment in AI continued to grow at a similar pace to software investment in the 1990s, US investment alone could approach 1 percent of US GDP by 2030.
The Goldman estimates are based on an analysis of US and European data on the tasks typically performed in thousands of different occupations. The researchers assumed that AI would be capable of tasks such as completing tax returns for a small business; evaluating a complex insurance claim; or documenting the results of a crime scene investigation.
They did not envisage AI being adopted for more sensitive tasks such as making a court ruling, checking the status of a patient in critical care, or studying international tax laws.
DELPHINE STRAUSS, FINANCIAL TIMES - 3/28/2023, 9:30 am (via Ars Technica)
February 26th, 2023
A new class of incredibly powerful AI models has made recent breakthroughs possible.
HAOMIAO HUANG - 1/30/2023
Progress in AI systems often feels cyclical. Every few years, computers can suddenly do something they’ve never been able to do before. “Behold!” the AI true believers proclaim, “the age of artificial general intelligence is at hand!” “Nonsense!” the skeptics say. “Remember self-driving cars?”
The truth usually lies somewhere in between.
We’re in another cycle, this time with generative AI. Media headlines are dominated by news about AI art, but there’s also unprecedented progress in many widely disparate fields. Everything from videos to biology, programming, writing, translation, and more is seeing AI progress at the same incredible pace.
December 30th, 2022
As Elon Musk's Category 5 tweetstorm continues, the once-obscure Mastodon social network has been gaining over 1,000 new refugees per hour, every hour, bringing its user count to about eight million.
Joining as a user is pretty easy. More than enough ex-Twitterers are happy finding a Mastodon instance via joinmastodon.org, getting a list of handles for their Twitter friends via Movetodon, and carrying on as before.
But what new converts may not realize is that Mastodon is just the most prominent node in a much broader movement to change the nature of the web.
With a core goal of decentralization, Mastodon and its kin are "federated," meaning you are welcome to put up a server as a home base for friends and colleagues (an "instance"), and users on all instances can communicate with users on yours. The most common metaphor is email, where yahoo.com, uchicago.edu, and condenast.com all host a local collection of users, but anybody can send messages to anybody else via standard messaging protocols. With cosmic ambitions, the new federation of freely communicating instances is called "the Fediverse."
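To make the email metaphor concrete, here is a minimal sketch in Python of how any client can resolve a fediverse handle to its home instance using the WebFinger lookup that Mastodon exposes. The account in the example is hypothetical, and the third-party requests library is assumed.

```python
# Minimal sketch: resolving a fediverse handle, email-style, via WebFinger.
# Mastodon serves /.well-known/webfinger for exactly this purpose; the
# handle below is hypothetical.
import requests

def resolve_handle(handle: str) -> str:
    """Look up the ActivityPub actor URL behind a user@instance handle."""
    user, domain = handle.lstrip("@").split("@")
    resp = requests.get(
        f"https://{domain}/.well-known/webfinger",
        params={"resource": f"acct:{user}@{domain}"},
        timeout=10,
    )
    resp.raise_for_status()
    # The 'self' link points at the actor document any other instance can fetch.
    for link in resp.json()["links"]:
        if link.get("rel") == "self":
            return link["href"]
    raise ValueError(f"No ActivityPub actor found for {handle}")

print(resolve_handle("@someone@mastodon.social"))  # hypothetical account
```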
I started using Mastodon in mid-2017 when I faintly heard the initial buzz. I found that the people who inhabited a world whose first major selling point was its decentralized network topology were geeky and counter-cultural. There were no #brands. Servers were (and are) operated by academic institutions, journalists, hobbyists, and activists in the LGBTQ+ community. The organizers of one instance, scholar.social, run an annual seminar series, where I have presented.
The decentralization aspect that was such a selling point for me was also a core design goal for Mastodon and the predecessors it built upon, such as GNU Social. In an interview with Time, lead developer Eugen Rochko said that he began the development of Mastodon in 2016 because Twitter was becoming too centralized and too important to discourse. "Maybe it should not be in the hands of a single corporation,” he said. His desire to build a new system “was generally related to a feeling of distrust of the top-down control that Twitter exercised."
As with many a web app, Mastodon is a duct-taping together of components and standards; hosting or interacting with a Mastodon instance requires some familiarity with all of these. Among them, and the headliner at the heart of the Fediverse, is the ActivityPub standard of the World Wide Web Consortium (W3C), which specifies how actors on the network are defined and interact.

Mastodon and ActivityPub evolved at about the same time, with Mastodon's first major release in early 2017 and ActivityPub finalized as a standard by the W3C in January 2018. Mastodon quickly adopted ActivityPub, and it has become such a focus of use that many forget that ActivityPub is usable in many contexts beyond reporting what users had for lunch. Like Mastodon, ActivityPub represents a rebellion against an increasingly centralized web.

Christine Lemmer-Webber is the lead author of the 2018 ActivityPub standard, based on prior work led by Evan Prodromou on another service called pump.io. Lemmer-Webber tells Ars that, when developing the ActivityPub standard, "We were like the only standards group at the W3C that didn't have corporate involvement... None of the big players wanted to do it." She felt that ActivityPub was a success for the idea of decentralization even before its multi-million user bump over the last few months. "The assumptions that you might have, that only the big players can play, turned out to be false. And I think that that should be really inspiring to everybody," she said. "It's inspiring to me."
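For a sense of what ActivityPub actually standardizes, here is a stripped-down sketch of an actor document of the kind an instance publishes for each user. The domain and username are hypothetical, and a real actor carries more fields (signing keys, follower collections, and so on).

```python
# A stripped-down ActivityPub actor document (JSON-LD). Real actors also
# include signing keys, followers/following collections, and more; the
# domain and username here are hypothetical.
import json

actor = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Person",
    "id": "https://example.social/users/alice",
    "preferredUsername": "alice",
    "inbox": "https://example.social/users/alice/inbox",    # where other instances POST activities
    "outbox": "https://example.social/users/alice/outbox",  # where her public posts are listed
}

print(json.dumps(actor, indent=2))
```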
Standards setting
The idea of an open web where actors use common standards to communicate is as old as, well, the web. "The dreams of the 90s are alive in the Fediverse," Lemmer-Webber told me. In the late '00s, there were more than enough siloed, incompatible networking and sharing systems like Boxee, Flickr, Brightkite, Last.fm, Flux, Ma.gnolia, Windows Live, Foursquare, Facebook, and many others we loved, hated, forgot about, or wish we could forget about. Various independent efforts to standardize interoperation across silos generally coalesced into the Activity Streams v1 standard.
➡︎Read the complete article @ https://arstechnica.com/gadgets/2022/12/mastodon-highlights-pros-and-cons-of-moving-beyond-big-tech-gatekeepers/
December 24th, 2022
Please read the complete article at the link below; it's extremely informative for those of us who share our work on different social media platforms.
Found via a Mastodon post by @StephanieZvan:
https://toot.cat/@StephanieZvan/109564832289331290
By: Alex Chen | Published in UX Collective
I wrote this how-to guide with the immensely helpful counsel and insights from Bex Leon and Robin Fanning, as well as through an online survey of Blind / low vision / visually impaired people.
What is an image description?
An image description is a written caption that describes the essential information in an image.
Image descriptions can define photos, graphics, gifs, and video — basically anything containing visual information. Providing descriptions for imagery and video is required as part of WCAG 2.1 (for digital ADA compliance).
It’s sometimes referred to as alt text since the alt attribute is a common place to store them.
While alt text and image descriptions are sometimes used synonymously, they're not actually the same thing. Alt text refers to the text specifically added to the alt attribute, and is often short and brief. Image descriptions can be found in the alt text, caption, or body of the webpage and are often more detailed. For more about alt text and image descriptions, check out @higher_priestess on Instagram.
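To make that distinction concrete, here is a minimal sketch of the same hypothetical image carrying a short alt text plus a longer, more detailed description in a visible caption. The image, file name, and wording are all invented for illustration.

```python
# Sketch: one (hypothetical) image with short alt text in the alt attribute
# and a longer image description in a visible <figcaption>.
short_alt = "Protesters holding signs at a climate march."
long_description = (
    "A crowd of protesters marches down a city street holding hand-painted "
    "signs; the largest reads 'There is no Planet B.' The sky is overcast."
)

html = f"""
<figure>
  <img src="march.jpg" alt="{short_alt}">
  <figcaption>{long_description}</figcaption>
</figure>
"""
print(html)
```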
Additionally, image descriptions are a gesture of care and an essential part of accessibility. Without them, content would be completely unavailable to Blind/low vision folks. By writing image descriptions, we show support for cross-disability solidarity and cross-movement solidarity.
How to write a good image description
Object-action-context
Something that I learned from talking to Bex is that there is a storytelling aspect to writing descriptions. It doesn't necessarily make sense to go from left to right and describe everything in an image, because that might lose the central message or create a disorienting feeling. For that reason, I came up with a framework that I recommend called object-action-context.
The object is the main focus. The action describes what’s happening, usually what the object is doing. The context describes the surrounding environment.
I recommend this format because it keeps the description objective, concise, and descriptive.
It should be objective so that people using the description can form their own opinions about what the image means. It should be concise so that it doesn’t take too long for people to absorb all the content, especially if there are multiple images. And it should be descriptive enough that it describes all the essential aspects of the image.
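As a rough illustration of the framework in practice, here is a small sketch. The helper function and the sample image are invented for this example, but the object-action-context ordering follows the guide.

```python
# Sketch of the object-action-context framework as a tiny helper.
def describe(obj: str, action: str, context: str) -> str:
    """Compose an image description: object first, then action, then context."""
    return f"{obj} {action} {context}"

alt_text = describe(
    obj="A golden retriever",            # object: the main focus
    action="leaps to catch a frisbee",   # action: what's happening
    context="in a sunny, grassy park.",  # context: the surrounding environment
)
print(alt_text)
# -> "A golden retriever leaps to catch a frisbee in a sunny, grassy park."
```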
October 22nd, 2022
AI Generators
I came across this disturbing article, one that will push the art world to evolve in ways we can't yet imagine.
3D printers have not replaced sculptors.
Art molds itself to the different tools available to human beings through time.
---Rage Against the Machine. Have you had enough of AI-generated art? Well, so have some artists. Business Insider took a deep dive into the case of artists who claim that their work is being copied by AI image generators. “I feel like something’s happening that I can’t control,” said artist Greg Rutkowski. He’s not alone. IGN reports that AT, a popular artist on Twitch, recently went viral after some on Twitter noted that a user named Musaishh had copied AT’s work, with plans to rework it using the platform Novel AI. According to IGN, “Musaishh has since deactivated their Twitter account after receiving backlash from social media users and artists alike.”
Greg Rutkowski is an artist with a distinctive style: he's known for creating fantasy scenes of dragons and epic battles, which have been used in fantasy games like Dungeons & Dragons.
He said it used to be "really rare to see a similar style to mine on the internet."
Yet if you search for his name on Twitter, you'll see plenty of images in his exact style — that he didn't make.
Rutkowski has become one of the most popular names in AI art, despite never having used the technology himself.
February 22nd, 2022
How do you navigate an immersive virtual world that’s still mostly conceptual yet manages to generate billions of visits and even more dollars? Buckle up, because the deeper you go, the weirder it gets.
One of the most buzzed-about destinations of 2022 is barely developed and widely misunderstood, starting with the fact that it isn’t, strictly speaking, real. And yet despite that existential disadvantage, the metaverse has managed to attract some of the world’s biggest brands, from Sotheby’s to the NFL, who’ve set up shop in the virtual universe to drop capsule collections, mint NFTs and auction off multimillion-dollar digital artworks. Along the way, the metaverse also became the hottest concert venue of pandemic-struck 2021, with A-list performances by Ariana Grande, Lil Nas X and Justin Bieber, all in avatar form.
Which is all fine, but what is it? The term itself, coined in Neal Stephenson’s 1992 sci-fi novel Snow Crash, is already headed for middle age, while the technological capability to actually create a fully immersive, interconnected virtual world remains a dream locked inside the mind of a yet-to-be-imagined super-computer. Still, the headlines keep coming, from Ralph Lauren’s winter-themed virtual fashion retail village to the hyper-realistic “meta-human” avatars that are being generated by Epic Games’ Unreal Engine digital creation studio. For a universe that doesn’t yet exist, the metaverse is surprisingly, if intangibly, real.
By Josh Condon – 2/19/2022
🎧 on Spotify
https://anchor.fm/rafael-salazar70/episodes/Everything-You-Wanted-to-Know-About-the-MetaverseBut-Were-Too-Afraid-to-Ask-e1ekp3p
Read all at https://rafaelesalazar.wordpress.com/2022/02/19/everything-you-wanted-to-know-about-the-metaverse-but-were-too-afraid-to-ask/#more-2401
February 4th, 2022
The dispute revolves around the movement of a 2014 work, ‘Quantum’, from one blockchain to another and how that affects its ownership and fungibility
A Canadian company is suing Sotheby’s and artist Kevin McCoy over the sale of an early NFT (non-fungible token) for almost $1.5m.
The artwork, Quantum, was first minted in May of 2014 and is regarded by many, including the auction house, as the first-ever NFT. It sold for $1.47m in June 2021 during Sotheby’s “Natively Digital” auction. But in a complaint filed on 1 February in the district court for southern New York, the plaintiff behind the Canadian company Free Holdings, of which this individual is the “sole member”, claims to be Quantum’s rightful owner, asserting that they secured the rights to the work seven years after its creation, after McCoy had let his ownership expire. The tech startup Nameless, which provided Sotheby’s with a condition report on the digital work prior to the auction, is also named as a defendant.
As the outlet Ledger Insights explained, the quandary arose because Quantum was originally minted using Namecoin, blockchain software modeled on Bitcoin’s code. Akin to domain names, Namecoin names need to be renewed roughly every 250 days. After creating the work in 2014, McCoy did not renew Quantum in 2015, meaning that it could presumably be claimed by another individual. It sat un-renewed for six years until, in April of 2021, Axios ran an article with the headline “Exclusive: The first-ever NFT from 2014 is on sale for $7 million plus”, and roughly two weeks later an individual with the Twitter handle @EarlyNFT registered as the owner of the dormant NFT.
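For a sense of where that roughly-250-day figure comes from, here is a back-of-the-envelope sketch. The block counts are assumptions based on Namecoin's commonly cited defaults, not figures from the article.

```python
# Back-of-the-envelope sketch of Namecoin-style name expiry. Figures are
# assumptions (Namecoin's commonly cited defaults), not from the article:
# names expire ~36,000 blocks after registration or renewal, and blocks
# arrive roughly every 10 minutes.
EXPIRY_BLOCKS = 36_000
MINUTES_PER_BLOCK = 10

days = EXPIRY_BLOCKS * MINUTES_PER_BLOCK / (60 * 24)
print(f"A name lasts roughly {days:.0f} days per renewal")  # ~250 days

def is_claimable(current_height: int, last_renewal_height: int) -> bool:
    """An un-renewed name can be re-registered once the expiry window passes."""
    return current_height - last_renewal_height > EXPIRY_BLOCKS
```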
According to court filings, “EarlyNFT is a pseudonym for Free Holding’s sole member.” This sole member then attempted to contact McCoy over the next month in a series of five tweets, which escalated in their hostility from “There’s a matter regarding your work ‘Quantum’ I’d like to discuss with you” on 6 April 2021 to, “Are you interested in participating in the sale of Quantum or not then?” paired with a GIF of Elmo shrugging, on 3 May 2021. McCoy did not respond to these tweets. According to the condition report listed on the auction lot, however, “this specific Namecoin entry was removed from the system after not being renewed, and was effectively burned from the chain”, meaning its ownership could not have been reregistered.
A still from Quantum (2014) by Kevin McCoy
© Kevin McCoy
January 17th, 2022
Nora N. Khan has emerged as one of the most important voices when it comes to all things related to art and technology. A Harvard grad with a degree in English and American Literature, she attended the most prestigious writing program in the country, the Iowa Writers’ Workshop, before swerving into arts criticism, philosophy and curating. Her book Seeing, Naming, Knowing, published by the Brooklyn Rail in 2019, investigated the impact of predictive algorithms and machine vision on the arts. Later that year, she became the Shed’s first guest curator with the celebrated exhibition “Manual Override,” bringing together artists to respond to and critique emerging technologies that, purposefully or not, replicate systems of oppression.
The rise of NFTs has provoked a mix of reactions among those who deal regularly with art and technology. Some immediately embraced this new medium, while others distanced themselves from it. Khan decided that she wanted to wait a bit to watch it develop, and now, as Topical Cream‘s editor-in-residence, she’s made her first foray into the NFT space.
Working with the NFT platform Foundation, she curated the sale “Experimental Models,” which Khan views as a way to highlight the work of female and gender non-conforming new media artists. The sale has the potential to shake up the NFT space, which has, admittedly, not shown an appetite for conceptually rich pieces. It also could add a diverse array of voices to an area of the art world that has largely tended to uphold white, male artists. With works by Danielle Braithwaite-Shirley, Rachel Rossin, Umber Majeed, and others, the sale is, in part, intended to bring artists who’ve rarely or never worked with NFTs into the fold.
To hear more, ARTnews spoke with Khan about her thoughts on NFTs and her slow research process.
By: SHANTI ESCALANTE-DE MATTEI
ARTnews