Adobe's newest generative AI experiment, Project Music GenAI Control, unveiled at the Hot Pod Summit in Brooklyn, aims to let users create and customize music without professional audio expertise. The prototype generates music from text prompts and allows editing within the same interface: users type a description such as "happy dance" or "sad jazz" and then fine-tune the result with integrated controls.

Those controls adjust repeating patterns, tempo, intensity, and structure, with options to remix sections and generate looping audio. The tool can also transform audio based on a reference melody and extend a clip's length to fit specific needs such as animations or podcasts.

Details of the editing interface have not yet been shared, but Adobe confirms that Project Music GenAI Control was developed in collaboration with researchers at the University of California, San Diego, and Carnegie Mellon University. The experiment is still at an early stage and has no announced release date, though it holds promise for future integration into Adobe's editing software. Interested users can follow its progress on the Adobe Labs website.