
Blog


Photographer Disqualified From AI Image Contest After Winning With Real Photo

June 13th, 2024


By Matt Growcoot - Article from PetaPixel June 12, 2024

Photographer Miles Astray has been disqualified from a picture competition after his real photograph won in the AI image category.

“I wanted to show that nature can still beat the machine and that there is still merit in real work from real creatives,” Astray tells PetaPixel over email.
“After seeing recent instances of AI-generated imagery beating actual photos in competitions, I started thinking about turning the story and its implications around by submitting a real photo into an AI competition.”

The color photography contest is judged by people who work for The New York Times, Getty Images, Phaidon Press, Christie’s, and Maddox Gallery, among others. None could apparently tell that Astray’s photo was real.

The 1839 color photography contest has numerous categories, with AI unusual in being the only one that is not camera-based. The rest are more familiar photography subgenres such as “Architecture,” “Still Life,” and “Film/Analog.”

In an email to PetaPixel, the competition’s organizers said that while they appreciate Astray’s “powerful message,” his entry has been disqualified out of consideration for the other artists.

“Our contest categories are specifically defined to ensure fairness and clarity for all participants,” says a spokesperson.

“Each category has distinct criteria that entrants’ images must meet. His submission did not meet the requirements for the AI-generated image category. We understand that was the point, but we don’t want to prevent other artists from their shot at winning in the AI category.

“We hope this will bring awareness (and a message of hope) to other photographers worried about AI.”

Read the full article at PetaPixel: https://petapixel.com/2024/06/12/photographer-disqualified-from-ai-image-contest-after-winning-with-real-photo/

Grok AI can now Understand Images

April 14th, 2024


I recently came across this news article on Mashable, which I found relevant to all of us, artists and the general public alike.

An eye-opener for sure; the feature will be available later this week.

I invite all to read & if you'd like to share your thoughts on this blog or my Flipboard Magazine on Artificial Intelligence, you are welcome to do so @ https://flipboard.com/@rafael_salazars/artificial-intelligence-ai-tas815ioz

#artificialintelligence #ai #technology #computerscience #chatgpt #openai

Here is a sample of it, from Mashable, by Chase DiBenedetto. Image credit: Jaap Arriens/NurPhoto via Getty Images.

➡️ Uh-oh, X's Grok AI can now 'understand' images

New "Vision" Grok will be available to testers and select users.

Elon Musk's AI chatbot can now "understand" images, including information-riddled diagrams and charts. Sorry, doesn't everyone use the platform once known as Twitter for multidisciplinary research and optimizing their workflows?

Introduced as Grok-1.5V — or Grok 1.5 "Vision," the company's "first-generation multimodal model" — the bot will be able to not only respond to your uploaded pictures and screenshots but also reason through complex documents, science diagrams, charts, screenshots, and photographs, the company says. Additionally, Grok-1.5V will gain "real-world spatial understanding" to better understand the physical world depicted in the images uploaded by its users.

"Advancing both our multimodal understanding and generation capabilities are important steps in building beneficial AGI that can understand the universe," the company wrote in its announcement. "In the coming months, we anticipate to make significant improvements in both capabilities, across various modalities such as images, audio, and video."

🧐SEE ALSO: If you're a paying X user, Elon Musk wants his Grok AI to write your posts for you, report says

Source: Mashable, by Chase DiBenedetto. Image credit: Jaap Arriens/NurPhoto via Getty Images.

New Data Poisoning Tool Enables Artists To Fight Back Against Image Generating AI

March 1st, 2024


Artists now have a new digital tool they can use in the event their work is scraped without permission by an AI training set.

The tool, called Nightshade, enables artists to add invisible changes to the pixels of their art before uploading it online. These data samples “poison” the massive image sets used to train AI image generators such as DALL-E, Midjourney, and Stable Diffusion, destabilizing their outputs in chaotic and unexpected ways as well as disabling “its ability to generate useful images”, reports MIT Technology Review.

For example, poisoned data samples can manipulate AI image-generating models into incorrectly believing that images of fantasy art are examples of pointillism, or images of Cubism are Japanese-style anime. The poisoned data is very difficult to remove, as it requires tech companies to painstakingly find and delete each corrupted sample.
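Nightshade's actual perturbations are computed by an optimization that targets specific concepts in a model, which is well beyond a blog snippet. As a loose illustration of only the more basic point — that a change of a couple of intensity levels per pixel is imperceptible to a human viewer — here is a sketch (the function name and parameters are my own, not from the researchers' tool):

```python
import numpy as np

def perturb_image(pixels: np.ndarray, strength: float = 2.0, seed: int = 0) -> np.ndarray:
    """Add a small, visually negligible change to an 8-bit RGB image.

    This is NOT Nightshade's algorithm (which optimizes the perturbation to
    mislead a specific model); it only shows that shifting each pixel by a
    few intensity levels out of 255 is invisible to humans.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-strength, strength, size=pixels.shape)
    # Clip back into the valid 0-255 range and restore the 8-bit type.
    return np.clip(pixels.astype(float) + noise, 0, 255).astype(np.uint8)

image = np.full((64, 64, 3), 128, dtype=np.uint8)  # flat gray test image
poisoned = perturb_image(image)
# Every pixel moved by at most 2 intensity levels — far below what a
# viewer can notice, yet enough to carry a hidden signal at scale.
```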

“We assert that Nightshade can provide a powerful tool for content owners to protect their intellectual property against model trainers that disregard or ignore copyright notices, do-not-scrape/crawl directives, and opt-out lists,” the researchers from the University of Chicago wrote in their report, led by professor Ben Zhao. “Movie studios, book publishers, game producers and individual artists can use systems like Nightshade to provide a strong disincentive against unauthorized data training.”

Nightshade could tip the balance of power back from AI companies towards artists and become a powerful deterrent against disrespecting artists’ copyright and intellectual property, Zhao told MIT Technology Review, which first reported on the research.

According to the research report, the researchers tested Nightshade on Stable Diffusion’s latest models and on an AI model they trained themselves from scratch. After they fed Stable Diffusion just 50 poisoned images of cars and then prompted it to create images of the vehicles, the usability of the output dropped to 20%. After 300 poisoned image samples, an attacker using the Nightshade tool can manipulate Stable Diffusion to generate images of cars to look like cows.

Prior to Nightshade, Zhao’s research team also received significant attention for Glaze, a tool which disrupts the ability for AI image generators to scrape images and mimic a specific artist’s personal style. The tool works in a similar manner to Nightshade through a subtle change in the pixels of images that results in the manipulation of machine-learning models.

Read All at: https://www.artnews.com/art-news/news/new-data-poisoning-tool-enables-artists-to-fight-back-against-image-generating-ai-companies-1234684663/

By Karen K. Ho / 10/2023 ArtNews

Database of 16,000 Artists Used to Train Midjourney AI, Including 6-Year-Old Child, Garners Criticism

January 3rd, 2024


For many, a new year includes resolutions to do better and build better habits. For Midjourney, the start of 2024 meant having to deal with a circulating list of artists whose work the company used to train its generative artificial intelligence program.

During the New Year’s weekend, artists linked to a Google Sheet on the social media platforms X (formerly known as Twitter) and Bluesky, alleging that it showed how Midjourney developed a database of time periods, styles, genres, movements, mediums, techniques, and thousands of artists to train its AI text-to-image generator. Jon Lam, a senior storyboard artist at Riot Games, also posted several screenshots of Midjourney software developers discussing the creation of a database of artists to train its AI image generator to emulate.

Read the entire article at: https://www.artnews.com/art-news/news/midjourney-ai-artists-database-1234691955/

Why AI Is Critical to Deep Space Exploration

April 27th, 2023


A new take on AI, as "Food For Thought" episode 2.

In conclusion: AI is just another tool.

Astronomers have confirmed that Earth-like planets orbit nearby stars, but they’re too far away for humans to reach. That's where artificially intelligent robots come in. On this episode of AI IRL, Bloomberg's Nate Lanxon and Jackie Davalos are joined by astrophysicist Neil deGrasse Tyson, who explains the ways in which AI can shed light on the universe’s great unknowns, help us explore new planets, and discover new galaxies. (Source: Bloomberg)
April 26th, 2023, 8:30 PM EDT

The mounting human and environmental costs of generative AI

April 13th, 2023


Op-ed: Planetary impacts, escalating financial costs, and labor exploitation all factor.

Dr. Sasha Luccioni is a Researcher and Climate Lead at Hugging Face, where she studies the ethical and societal impacts of AI models and datasets. She is also a director of Women in Machine Learning (WiML), a founding member of Climate Change AI (CCAI), and chair of the NeurIPS Code of Ethics committee. The opinions in this piece do not necessarily reflect the views of Ars Technica.

Over the past few months, the field of artificial intelligence has seen rapid growth, with wave after wave of new models like Dall-E and GPT-4 emerging one after another. Every week brings the promise of new and exciting models, products, and tools. It’s easy to get swept up in the waves of hype, but these shiny capabilities come at a real cost to society and the planet.
Downsides include the environmental toll of mining rare minerals, the human costs of the labor-intensive process of data annotation, and the escalating financial investment required to train AI models as they incorporate more parameters.

Let’s look at the innovations that have fueled recent generations of these models—and raised their associated costs.

By: Sasha Luccioni - Ars Technica 04/12/2023

Read entire article ➡️ https://rafaelsalazar.io/blogs/news/the-mounting-human-and-environmental-costs-of-generative-ai

Generative AI set to affect 300 million jobs across major economies

March 30th, 2023


A Changing Labor Market ~

The latest breakthroughs in artificial intelligence could lead to the automation of a quarter of the work done in the US and eurozone, according to research by Goldman Sachs.

The investment bank said on Monday that “generative” AI systems such as ChatGPT, which can create content that is indistinguishable from human output, could spark a productivity boom that would eventually raise annual global gross domestic product by 7 percent over a 10-year period.

But if the technology lived up to its promise, it would also bring “significant disruption” to the labor market, exposing the equivalent of 300 million full-time workers across big economies to automation, according to Joseph Briggs and Devesh Kodnani, the paper’s authors. Lawyers and administrative staff would be among those at greatest risk of becoming redundant.

They calculate that roughly two-thirds of jobs in the US and Europe are exposed to some degree of AI automation, based on data on the tasks typically performed in thousands of occupations.

Most people would see less than half of their workload automated and would probably continue in their jobs, with some of their time freed up for more productive activities.

In the US, this should apply to 63 percent of the workforce, they calculated. A further 30 percent working in physical or outdoor jobs would be unaffected, although their work might be susceptible to other forms of automation.

But about 7 percent of US workers are in jobs where at least half of their tasks could be done by generative AI and are vulnerable to replacement.

Goldman said its research pointed to a similar impact in Europe. At a global level, since manual jobs are a bigger share of employment in the developing world, it estimates about a fifth of work could be done by AI—or about 300 million full-time jobs across big economies.

The report will stoke debate over the potential of AI technologies both to revive the rich world’s flagging productivity growth and to create a new class of dispossessed white-collar workers, who risk suffering a similar fate to that of manufacturing workers in the 1980s.

Goldman’s estimates of the impact are more conservative than those of some academic studies, which included the effects of a wider range of related technologies.

A paper published last week by OpenAI, the creator of GPT-4, found that 80 percent of the US workforce could see at least 10 percent of their tasks performed by generative AI, based on analysis by human researchers and the company’s large language model (LLM).

Europol, the law enforcement agency, also warned this week that rapid advances in generative AI could aid online fraudsters and cyber criminals, so that “dark LLMs…  may become a key criminal business model of the future.”

Goldman said that if corporate investment in AI continued to grow at a similar pace to software investment in the 1990s, US investment alone could approach 1 percent of US GDP by 2030.

The Goldman estimates are based on an analysis of US and European data on the tasks typically performed in thousands of different occupations. The researchers assumed that AI would be capable of tasks such as completing tax returns for a small business; evaluating a complex insurance claim; or documenting the results of a crime scene investigation.

They did not envisage AI being adopted for more sensitive tasks such as making a court ruling, checking the status of a patient in critical care, or studying international tax laws.

Published by Ars Technica - March 28, 2023
DELPHINE STRAUSS, FINANCIAL TIMES - 3/28/2023, 9:30 am


The generative AI revolution has begun, how did we get here?

February 26th, 2023


A new class of incredibly powerful AI models has made recent breakthroughs possible.
HAOMIAO HUANG - 1/30/2023

Progress in AI systems often feels cyclical. Every few years, computers can suddenly do something they’ve never been able to do before. “Behold!” the AI true believers proclaim, “the age of artificial general intelligence is at hand!” “Nonsense!” the skeptics say. “Remember self-driving cars?”

The truth usually lies somewhere in between.

We’re in another cycle, this time with generative AI. Media headlines are dominated by news about AI art, but there’s also unprecedented progress in many widely disparate fields. Everything from videos to biology, programming, writing, translation, and more is seeing AI progress at the same incredible pace.

Mastodon and the pros and cons of moving beyond Big Tech gatekeepers

December 30th, 2022


As Elon Musk's Category 5 tweetstorm continues, the once-obscure Mastodon social network has been gaining over 1,000 new refugees per hour, every hour, bringing its user count to about eight million.

Joining as a user is pretty easy. More than enough ex-Twitterers are happy finding a Mastodon instance via joinmastodon.org, getting a list of handles for their Twitter friends via Movetodon, and carrying on as before.

But what new converts may not realize is that Mastodon is just the most prominent node in a much broader movement to change the nature of the web.

With a core goal of decentralization, Mastodon and its kin are "federated," meaning you are welcome to put up a server as a home base for friends and colleagues (an "instance"), and users on all instances can communicate with users on yours. The most common metaphor is email, where yahoo.com, uchicago.edu, and condenast.com all host a local collection of users, but anybody can send messages to anybody else via standard messaging protocols. With cosmic ambitions, the new federation of freely communicating instances is called "the Fediverse."
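As a concrete (and much simplified) illustration of what federated servers actually exchange, here is a minimal ActivityStreams 2.0 "Create" activity built in Python. The actor handle and note content are hypothetical; scholar.social is used only because it is mentioned below as a real instance:

```python
import json

# A minimal ActivityStreams 2.0 "Create" activity: one server telling
# another that a (hypothetical) user has posted a Note.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://scholar.social/users/alice",           # sender's actor URL (hypothetical user)
    "to": ["https://www.w3.org/ns/activitystreams#Public"],  # the spec's "public" addressing URI
    "object": {
        "type": "Note",
        "content": "Hello from the Fediverse!",
        "attributedTo": "https://scholar.social/users/alice",
    },
}

# In real federation this JSON is signed and POSTed to another
# instance's inbox endpoint; here we just serialize it.
payload = json.dumps(activity)
```

Because the payload is plain JSON over HTTP following a shared vocabulary, any server that speaks the standard can receive it — which is exactly the email-like interoperability described above.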

I started using Mastodon in mid-2017 when I faintly heard the initial buzz. I found that the people who inhabited a world whose first major selling point was its decentralized network topology were geeky and counter-cultural. There were no #brands. Servers were (and are) operated by academic institutions, journalists, hobbyists, and activists in the LGBTQ+ community. The organizers of one instance, scholar.social, run an annual seminar series, where I have presented.

The decentralization aspect that was such a selling point for me was also a core design goal for Mastodon and the predecessors it built upon, such as GNU Social. In an interview with Time, lead developer Eugen Rochko said that he began the development of Mastodon in 2016 because Twitter was becoming too centralized and too important to discourse. "Maybe it should not be in the hands of a single corporation,” he said. His desire to build a new system “was generally related to a feeling of distrust of the top-down control that Twitter exercised."

As with many a web app, Mastodon is a duct-taping together of components and standards; hosting or interacting with a Mastodon instance requires some familiarity with all of these. Among them, and the headliner at the heart of The Fediverse, is the ActivityPub standard of the World Wide Web Consortium (W3C), which specifies how actors on the network are defined and interact. Mastodon and ActivityPub evolved at about the same time, with Mastodon's first major release in early 2017 and ActivityPub finalized as a standard by the W3C in January 2018.

Mastodon quickly adopted ActivityPub, and it has become such a focus of use that many forget that ActivityPub is usable in many contexts beyond reporting what users had for lunch. Like Mastodon, ActivityPub represents a rebellion against an increasingly centralized web.

Christine Lemmer-Webber is the lead author of the 2018 ActivityPub standard, based on prior work led by Evan Prodromou on another service called pump.io. Lemmer-Webber tells Ars that, when developing the ActivityPub standard, "We were like the only standards group at the W3C that didn't have corporate involvement... None of the big players wanted to do it."

She felt that ActivityPub was a success for the idea of decentralization even before its multi-million user bump over the last few months. "The assumptions that you might have, that only the big players can play, turned out to be false. And I think that that should be really inspiring to everybody," she said. "It's inspiring to me."

Standards setting

The idea of an open web where actors use common standards to communicate is as old as, well, the web. "The dreams of the 90s are alive in the Fediverse," Lemmer-Webber told me. In the late '00s, there were more than enough siloed, incompatible networking and sharing systems like Boxee, Flickr, Brightkite, Last.fm, Flux, Ma.gnolia, Windows Live, Foursquare, Facebook, and many others we loved, hated, forgot about, or wish we could forget about. Various independent efforts to standardize interoperation across silos generally coalesced into the Activity Streams v1 standard.

➡︎ Read the complete article @ https://arstechnica.com/gadgets/2022/12/mastodon-highlights-pros-and-cons-of-moving-beyond-big-tech-gatekeepers/

How to write an image description for Social Media

December 24th, 2022

Please read the complete article at the link below.

Extremely informative for those of us who share our work on different social media platforms.
Found via a Mastodon post by @StephanieZvan:
https://toot.cat/@StephanieZvan/109564832289331290

By: Alex Chen | Published in UX Collective

I wrote this how-to guide with the immensely helpful counsel and insights from Bex Leon and Robin Fanning, as well as through an online survey of Blind / low vision / visually impaired people.

What is an image description?
An image description is a written caption that describes the essential information in an image.
Image descriptions can define photos, graphics, gifs, and video — basically anything containing visual information. Providing descriptions for imagery and video is required as part of WCAG 2.1 (for digital ADA compliance).
It’s sometimes referred to as alt text since the alt attribute is a common place to store them.
While alt text and image descriptions are sometimes used synonymously, they’re not actually the same thing. Alt text refers to the text specifically added to the alt attribute, and is often short and brief. Image descriptions can be found in the alt text, caption, or body of the webpage and are often more detailed. For more about alt text and image descriptions, check out @higher_priestess on Instagram.

Additionally, image descriptions are a gesture of care and an essential part of accessibility. Without them, content would be completely unavailable to Blind/low vision folks. By writing image descriptions, we show cross-disability and cross-movement solidarity.

How to write a good image description

Object-action-context
Something that I learned from talking to Bex is that there is a storytelling aspect to writing descriptions. It doesn’t necessarily make sense to go from left to right and describe everything in an image, because that might lose the central message or create a disorienting feeling. For that reason, I came up with a framework that I recommend called object-action-context.
The object is the main focus. The action describes what’s happening, usually what the object is doing. The context describes the surrounding environment.
I recommend this format because it keeps the description objective, concise, and descriptive.

It should be objective so that people using the description can form their own opinions about what the image means. It should be concise so that it doesn’t take too long for people to absorb all the content, especially if there are multiple images. And it should be descriptive enough that it describes all the essential aspects of the image.
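The object-action-context framework can be sketched as a tiny helper; the function name and the example description below are my own illustration, not from the article:

```python
def image_description(obj: str, action: str, context: str) -> str:
    """Compose an image description as object, then action, then context."""
    return f"{obj} {action} {context}."

alt_text = image_description(
    "A golden retriever",          # object: the main focus
    "leaping to catch a frisbee",  # action: what's happening
    "in a sunlit park",            # context: the surrounding environment
)
print(alt_text)
# A golden retriever leaping to catch a frisbee in a sunlit park.
```

Reading the three parts in that fixed order keeps the central subject first, which is exactly the storytelling point made above.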

 
