Goseeko blog

AI platforms I use and how I use them

by Sanjay Mukherjee

Beyond the fun and experimentation, I use generative AI for a range of different work purposes. A third of my work involves creative skillsets – visualisation, design, illustration, animation, image editing, videography, layout and typography, sound design, audioscapes, among others. 

Typically, my work process involves four stages: creation (manual), recording the creation (manual + software), design (manual + software-enabled), and development (software-driven). Each main process contains many sub-processes when done manually or in a manual-plus-software mode, and many of these sub-processes are really repetitions of tasks or steps. This is where I lean on generative AI: it does not directly impact or compromise my ability to create or design, but it drastically cuts down time. Let me explain with an example.

I have been designing and making videos for education, learning, music and short films for quite a few years. Until 2020, making a 3-minute video took anywhere from a few weeks to a month, depending on the availability of equipment and people with the right skills.

Typically, I wrote the scripts and storyboards with all production and programming notes, including visual, sound and motion components. Then photographers, videographers, graphic designers, artists and others were required to develop the components, and finally a production assistant, or at least a video editor, was needed to bring it all together and produce the final output. The time was mainly spent (wasted, I always felt) in communicating the vision to different sets of people in a language they understood: graphic designers think and communicate visually, sound people speak in frequencies and technical jargon, and so on.

The Traditional Creative Workflow: Manual and Time-Intensive

A couple of years ago, just post-COVID-19, I had to create a music video while face-to-face interaction was still a health risk. I took an entirely different approach, deciding to use only digital art. I created the digital art from photographs shot on an iPhone, using the iColorama app to create the artworks, Adobe Photoshop to resize and standardise the technical quality of the art assets, and then iMovie to create the actual video. In terms of effort, it took me 16 hours in all, from creating the artworks to photo editing, video creation and publishing. (The video eventually used 77 pieces of digital art.)

If I had to create 77 pieces of visuals with illustrators or graphic designers, the artwork alone would have taken at least 154 hours (assuming 2 hours per artwork). The video production, music, sound design and so on would have taken anywhere between 16 and 32 hours, including iterations and edits.
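The back-of-envelope comparison above can be sketched as a few lines of Python. The figures are the ones quoted in this article; the hourly rates are estimates, not measurements:

```python
# Back-of-envelope comparison of the two workflows, using the
# figures quoted in the article (all numbers are rough estimates).
ARTWORKS = 77
HOURS_PER_ARTWORK = 2          # assumed rate for a human illustrator
PRODUCTION_HOURS = (16, 32)    # low/high estimate, incl. iterations and edits

traditional_art_hours = ARTWORKS * HOURS_PER_ARTWORK   # 154 hours of artwork
traditional_total = (traditional_art_hours + PRODUCTION_HOURS[0],
                     traditional_art_hours + PRODUCTION_HOURS[1])

solo_digital_total = 16        # the iPhone + iColorama + Photoshop + iMovie route

print(f"Traditional: {traditional_total[0]}-{traditional_total[1]} hours")
print(f"Solo digital: {solo_digital_total} hours")
print(f"Speed-up: at least {traditional_total[0] / solo_digital_total:.1f}x")
```

Even taking the lower bound for the traditional workflow, the solo digital route comes out more than ten times faster.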

The Turning Point: Going Fully Digital Post-COVID with Artificial Intelligence

Today, I can create (all by myself) a 2-minute video within 2 hours, with at least 2-3 options for each 5-second clip. That is how efficient generative AI can be. I have created 10-minute films within a day. The main cost involved is a paid subscription to the appropriate AI platform. For videos, I use a combination of ChatGPT (to help translate my storyboard into a format that Sora works with easily) and Sora (for the video generation). And since both are from OpenAI, a single subscription works well.

So what are the different AI platforms I use? Here is a partial list (I use a few others off and on for specific projects, plus others that I test and evaluate from time to time):

I use ChatGPT and Claude for brainstorming and research most of the time. Gemini and Copilot come into play when I am in the Google or Microsoft ecosystems, and I use Stability when I need a third opinion. For images, I use MidJourney, Leonardo or DALL·E (within ChatGPT), depending on the type of image I require. For videos, I use Sora and sometimes Runway. Jasper I use sparingly, for marketing or business campaigns. Otter is my go-to tool for transcribing audio. For designing apps, I use Grok for coding assistance.

I also use several other software tools that are AI-assisted or have AI components built on top of traditional high-quality processes: ToonCamera for visual effects, and Adobe's Creative Cloud suite, including Firefly, Premiere Pro, InDesign and Audition.

I use software and AI as tools or assistants to do the repetitive work that does not take away from my skills. For example, I analyse data manually but use charts and graphs to present it to an audience. I do my own proofreading and editing (no spellchecks or grammar reviews), since that is a good way to maintain language skills. And the bulk of my work is still done with basic tools – writing (or typing), drawing, sketching designs, drafting texts, recording tunes and dialogue, photographing backgrounds and references, shooting reference video clips and footage, and so on.

Using AI as an Assistant, Not a Replacement

In our own journey of self-development, it is important to remember that speed comes from doing something again and again, while accuracy comes from exercising judgment regularly. It is perfectly beneficial to delegate (to a person or an AI) tasks that are repetitive in nature, but it can be detrimental to delegate tasks that are part of our core skills: if we fall out of the habit of exercising core skills, they will erode, and eventually it will become difficult to do those tasks at all. Whether we use AI to do our work or to assist in our work is up to us.

I also structure my usage of AI tools based on the lifecycle of each project. In the ideation phase, ChatGPT and Claude help me stretch and refine concepts. Their differing styles offer contrast—ChatGPT often helps structure chaotic thoughts while Claude brings nuance and breadth. When scripting or outlining a course or video, I’ll sometimes begin with voice notes that I transcribe using Otter. This speeds up the transfer from idea to editable text. For music and audio cues, I sometimes test audioscapes with AI-generated ambient sounds or loops to hear variations quickly before moving to more nuanced human production.

In the execution phase, I often use MidJourney or Leonardo to generate visual options that act as mood boards or story prompts. These visuals don’t always end up in the final work, but they accelerate visual thinking and can unblock creative paralysis. DALL·E, which is accessible within ChatGPT, becomes particularly helpful when I want a quick modification or a concept sketch while brainstorming interactively.

When working with longer-form video or app development, I sometimes toggle between Grok and Copilot depending on the language or framework I'm using. This cross-checking enhances both reliability and learning. I don't just use AI for automation but also for stimulation – posing questions, generating alternative interpretations, or simulating user flows.

Testing is another area where AI helps. Whether it’s checking UI contrast using plugins or getting feedback on tone or phrasing via different AI personas, the ability to generate multiple perspectives rapidly is invaluable. Importantly, none of this replaces the human feedback loop. I still review outputs with trusted collaborators. AI helps me think faster and broader, not lazier. The aim isn’t perfection—but meaningful acceleration and augmentation of human creativity.

Study AI with Goseeko: https://www.goseeko.com/landing/international-students/
