
Thursday, February 29, 2024

Adobe unveils a generative AI tool poised to revolutionize music creation and editing

Adobe's newest generative AI experiment, Project Music GenAI Control, unveiled at the Hot Pod Summit in Brooklyn, aims to let anyone create and customize music, even without professional audio expertise. The prototype generates music from text prompts and lets users edit the results within the same interface. Typing a description such as "happy dance" or "sad jazz" produces music in that style, which can then be fine-tuned with integrated editing controls. These controls adjust repeating patterns, tempo, intensity, and structure, with options to remix sections and generate seamlessly looping audio. The tool can also transform audio based on a reference melody and extend a clip's length for specific needs such as animations or podcasts.

Adobe has not yet shown what the editing interface looks like, but it confirmed that Project Music GenAI Control is being developed in collaboration with researchers at the University of California and Carnegie Mellon University. The experiment is still at an early stage and could eventually be integrated into Adobe's editing software, though no release date has been announced. Interested users can follow its progress on the Adobe Labs website.
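
Adobe has not published an API or code for Project Music GenAI Control, so the workflow above can't be reproduced directly. As a rough, hedged illustration of the same text-prompt-to-music idea, the sketch below uses the open-source MusicGen model through the Hugging Face transformers library; the model name, prompt, and token budget are illustrative assumptions, not anything Adobe ships.

```python
# Illustrative only: Adobe's Project Music GenAI Control has no public API.
# This sketch shows the text-prompt-to-music idea with the open-source
# MusicGen model via Hugging Face transformers (model/prompt are assumptions).
import scipy.io.wavfile
from transformers import AutoProcessor, MusicgenForConditionalGeneration

processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

# A text description of the desired style, like the "happy dance" example above.
inputs = processor(text=["happy dance"], padding=True, return_tensors="pt")

# Generate roughly five seconds of audio tokens and decode them to a waveform.
audio = model.generate(**inputs, do_sample=True, guidance_scale=3.0, max_new_tokens=256)

# Save the clip; looping, remixing, or extending it would be manual steps here,
# whereas Adobe's prototype exposes those as built-in controls.
rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("happy_dance.wav", rate=rate, data=audio[0, 0].numpy())
```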

Wednesday, December 13, 2023

Meta’s AI for Ray-Ban smart glasses


Meta is starting an early access test of the most advanced AI capabilities of its Ray-Ban Meta smart glasses. In a recent announcement, the company said it will begin rolling out multimodal AI features that let the built-in assistant answer questions about what it sees and hears through the glasses' camera and microphones. Mark Zuckerberg showed off the update on Instagram, asking the glasses to suggest pants to match a shirt he was holding, translate text, and caption images.

Zuckerberg first discussed the multimodal capabilities in a September Decoder interview with The Verge's Alex Heath, where he described users consulting the assistant throughout the day for all kinds of questions. In a separate demo, CTO Andrew Bosworth showed the assistant accurately describing objects and helping with tasks such as captioning photos and translating language. During the initial test phase, access will be limited to a small number of users who opt in, and the trial will be restricted to the United States.
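
The assistant that runs on the glasses is not publicly accessible, but the photo-captioning behavior described above can be approximated with open models. The sketch below is a hedged illustration, not Meta's actual pipeline: it uses an image-captioning pipeline from Hugging Face transformers, and the model name and image path are assumptions.

```python
# Illustrative only: Meta's glasses assistant is not a public API.
# This approximates the "caption what the camera sees" behavior with an
# open image-captioning model (model name and image path are assumptions).
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

# On the glasses, the frame would come from the built-in camera;
# here we simply caption a local photo.
result = captioner("shirt_photo.jpg")
print(result[0]["generated_text"])  # e.g. a short description of the scene
```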