GenAI2.10 for Producers: INTERACTIVE & GAME DEVELOPMENT
Player One, Meet AI: A Revolution in Game Interaction
AI’s transformative power is no longer just a futuristic promise—it’s already revolutionizing how we create and experience interactive content and games. For digital media producers, especially in animation, VFX, and emerging media, generative AI (GenAI) tools bring exciting opportunities. From generating immersive game worlds to creating realistic character models and interactive avatars, these tools redefine the creative process, speed, and possibilities of development.
The Rise of AI in Interactive and Game Development
At Mechanism Digital, we’ve experienced firsthand how GenAI is reshaping workflows. AI tools can create intricate backgrounds, produce 2D and 3D models, and generate highly accurate textures in a fraction of the time it used to take manually. Motion capture, for instance, no longer requires expensive setups—it can be done using just a smartphone. Tools like Wonder Studio and Meshcapade seamlessly blend motion capture data, opening up new doors for interactive content with minimal friction.
Customizing Interactive Experiences
Interactive games powered by GenAI offer personalization like never before. AI-driven interactive avatars, for instance, can be tailored to suit unique player preferences, making gameplay experiences more engaging. With the ability to generate immersive environments and dynamically adapt them to user input, game development is becoming more about creativity and less about constraints. This flexibility allows for niche games that appeal to specific audiences, giving rise to more diverse, rich, and personalized content.
Challenges and Opportunities
Despite the rapid progress, some challenges remain. AI-generated 3D models often come with baked-in lighting, limiting flexibility in animation or game scenes. Additionally, while GenAI accelerates production, there’s still a need for human oversight and refinement. Balancing creative control with automation is key to leveraging these tools effectively for interactive experiences.
As AI tools mature, producers of digital media can expect an even greater impact—reducing costs, enhancing creativity, and driving faster innovation. The future of interactive media, fueled by AI, is here, and it’s only just beginning.
Archived recordings at this link
6/28/24 GenAI for Images
7/12/24 GenAI for Video
7/26/24 GenAI for Design
8/9/24 GenAI for 3D
8/23/24 GenAI for Audio
9/6/24 GenAI for Editing
9/20/24 GenAI for Post-Production & VFX
10/4/24 GenAI for Storytelling
Friday 10/18/24 GenAI for Marketing
Friday 11/1/24 GenAI for Animation
Friday 11/15/24 GenAI for Game Development
Friday 12/6/24 GenAI for Producing
Friday 12/20/24 GenAI for Emerging Media & the Future
GenAI2.9 for Producers: ANIMATION
Revolutionizing Animation with GenAI Tools
Animation, long a blend of art and tech, is now evolving rapidly thanks to Generative AI (GenAI). As digital media producers explore AI-powered tools, the possibilities for creating dynamic visuals have expanded like never before. At Mechanism Digital, we’ve embraced AI-enhanced animation across all production phases, enhancing our capabilities in design, motion, and finishing. By leveraging these tools, we’ve found new ways to push creative limits while meeting the unique demands of animation and VFX.
Our journey with AI tools has brought us to some impressive applications, from AI-enhanced animatic generation to complex motion capture that previously required extensive setup and specialized equipment. Today, tools like Luma Labs and Wonder Dynamics provide ways to convert images to video, offering digital artists powerful and accessible options. With GenAI, 2D and 3D designs come to life through automated in-betweening, lip-syncing, and dynamic effects that would be impossible to achieve within standard timelines and budgets.
While these tools simplify tasks and amplify creativity, they’re also blurring the line between traditional animation and AI. Using GenAI to manipulate and create visual elements feels less like assembly and more like an organic expansion of what’s possible in animation. As GenAI capabilities continue to grow, animators are finding new ways to experiment, unlocking ‘happy accidents’ that make each project unique. Whether it’s creating realistic character movements or generating complex background effects, GenAI is a game-changer, offering producers the chance to create faster, smarter, and more collaboratively than ever before.
Archived recordings at this link
6/28/24 GenAI for Images
7/12/24 GenAI for Video
7/26/24 GenAI for Design
8/9/24 GenAI for 3D
8/23/24 GenAI for Audio
9/6/24 GenAI for Editing
9/20/24 GenAI for Post-Production & VFX
10/4/24 GenAI for Storytelling
Friday 10/18/24 GenAI for Marketing
Friday 11/1/24 GenAI for Animation
Friday 11/15/24 GenAI for Game Development
Friday 12/6/24 GenAI for Producing
Friday 12/20/24 GenAI for Emerging Media & the Future
GenAI2.8 for Producers: MARKETING
AI in Action: Tools to Elevate Your Media Marketing Game
In today's fast-paced world of animation, visual effects, and emerging media, the ability to adapt quickly is essential. Producers are now tapping into GenAI tools, which are revolutionizing the way content is created and delivered, particularly in marketing. As producers, especially those working in digital media, staying on top of these technologies is crucial for both efficiency and creativity. GenAI for marketing offers innovative solutions that help automate processes, personalize content, and engage audiences in ways we’ve never seen before.
From generating videos with dynamic captions to tailoring personalized marketing messages using AI-driven personas, GenAI is making it easier for small studios and large corporations alike to create content that connects. For instance, tools like Opus Pro and Canva are streamlining social media outputs, helping producers resize and reformat content across platforms like LinkedIn, Instagram, and TikTok with minimal effort. These tools don’t just save time—they also maintain the professional quality needed to stand out in crowded digital spaces.
Moreover, AI is reshaping how we approach consumer engagement. By utilizing personalized video and email content, producers can deliver a more interactive and tailored experience to their audience, leveraging data-driven insights. In a world where content is king, AI is helping to crown producers as masters of marketing by enabling them to deliver smarter, faster, and more captivating campaigns. And as these tools continue to evolve, producers of animation, visual effects, and emerging media can look forward to even more powerful capabilities on the horizon.
Archived recordings at this link
6/28/24 GenAI for Images
7/12/24 GenAI for Video
7/26/24 GenAI for Design
8/9/24 GenAI for 3D
8/23/24 GenAI for Audio
9/6/24 GenAI for Editing
9/20/24 GenAI for Post-Production & VFX
10/4/24 GenAI for Storytelling
Friday 10/18/24 GenAI for Marketing
Friday 11/1/24 GenAI for Animation
Friday 11/15/24 GenAI for Game Development
Friday 12/6/24 GenAI for Producing
Friday 12/20/24 GenAI for Emerging Media & the Future
GenAI2.7 for Producers: STORYTELLING
From Concept to Cut: GenAI Powers Storytelling
As producers in digital media, especially those working in animation, visual effects, and emerging media, we constantly look for ways to streamline storytelling without sacrificing creativity. Generative AI (GenAI) tools have become an exciting addition to our production toolkit, offering us the ability to craft engaging narratives with greater efficiency. At Mechanism Digital, we’ve embraced these AI technologies across every stage of the storytelling process, from conceptualizing scripts to designing characters, animating scenes, and adding the final visual effects.
One of the standout applications of GenAI is in *storyboarding* and *concept development*. AI-powered tools can swiftly generate storyboards that help visualize scenes and camera movements, speeding up the process from initial idea to production-ready materials. These tools not only save time but also open new doors for creative exploration, generating ideas that might not have been considered otherwise.
In animation and VFX, we use GenAI to support the development of complex characters and scenes. With AI-assisted *image creation* and *video generation*, we can bring abstract concepts to life faster than ever, whether it's adjusting a character’s expression or creating an entire digital environment. Even though AI tools are highly advanced, the creative input of experienced producers remains crucial. We guide these tools to meet the narrative vision, ensuring that the final output resonates with the intended story.
As these GenAI tools evolve, we can expect even more robust capabilities for storytelling. However, it’s important to remember that, while AI accelerates the technical process, human creativity remains at the heart of exceptional storytelling. The magic happens when experienced professionals harness AI’s power to enhance, rather than replace, the creative process.
Archived recordings at this link
Register at this link for the next chapter in our 13-part series on GenAI for Digital Media Producers.
Each :30 minute talk will be on a specific topic, followed by audience questions/discussion.
Registrants will receive access to recording & deck w/links.
6/28/24 GenAI for Images
7/12/24 GenAI for Video
7/26/24 GenAI for Design
8/9/24 GenAI for 3D
8/23/24 GenAI for Audio
9/6/24 GenAI for Editing
9/20/24 GenAI for Post-Production & VFX
10/4/24 GenAI for Storytelling
Friday 10/18/24 GenAI for Marketing
Friday 11/1/24 GenAI for Animation
Friday 11/15/24 GenAI for Game Development
Friday 12/6/24 GenAI for Producing
Friday 12/20/24 GenAI for Emerging Media & the Future
GenAI2.6 for Producers: VFX & Post-Production
Cutting-Edge Compositing: AI’s Magic in Post-Production
In the world of digital media production, especially for animation and visual effects (VFX), AI tools are nothing short of revolutionary. As the technology evolves, producers in the VFX space are finding themselves with a rapidly expanding toolkit. From automating time-consuming tasks like rotoscoping to generating entire scenes with a few prompts, GenAI is transforming post-production workflows.
At Mechanism Digital, we've been lucky enough to work with a variety of AI-powered tools. Adobe Firefly, Topaz Labs, and Comfy UI, among others, are helping us streamline the process of upscaling footage, refining details, and even creating brand-new elements for our projects. These tools don’t just save time—they unlock creative possibilities we hadn’t considered before.
One of the most exciting areas of development is AI-assisted compositing. Depth maps and normal maps allow us to add realistic lighting and reflections to integrate elements into scenes seamlessly. Whether we're adjusting lighting in post-production or removing unwanted objects, the precision and speed AI brings are unmatched.
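As a rough illustration of how a normal pass drives relighting, here is a minimal NumPy sketch using synthetic stand-in data and an assumed normal-map convention (not any specific compositing tool): it computes a simple Lambertian term from the normals and uses it to add a new key light over the plate.

```python
import numpy as np

def relight(plate, normals, light_dir, fill=0.6, gain=0.8):
    """Add a simple Lambertian key light to a plate using a normal pass.

    plate:     H x W x 3 float image
    normals:   H x W x 3 world-space normals in [-1, 1]
    light_dir: direction from the surface toward the new light
    """
    n = normals / (np.linalg.norm(normals, axis=-1, keepdims=True) + 1e-6)
    l = np.asarray(light_dir, dtype=np.float32)
    l = l / np.linalg.norm(l)
    lambert = np.clip(n @ l, 0.0, 1.0)[..., None]   # how much each pixel faces the light
    return plate * (fill + gain * lambert)

# Toy usage with synthetic arrays standing in for rendered AOVs.
plate = np.random.rand(270, 480, 3).astype(np.float32)
normals = np.random.uniform(-1, 1, (270, 480, 3)).astype(np.float32)
relit = relight(plate, normals, light_dir=[0.4, 0.6, 0.7])
```

Real compositing packages wrap this same dot-product idea in far more sophisticated shading, but the principle is identical.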
But it’s not all about automation. Creative control remains at the forefront, and combining traditional 2D and 3D methods with AI gives us the best of both worlds. With tools like Replicate and Flawless, we can fine-tune projects, ensuring that every detail, from facial expressions to environmental elements, aligns with the director’s vision. This hybrid workflow helps us stay in control while leveraging AI's power to amplify our efficiency.
As AI continues to revolutionize VFX and post-production, producers need to stay ahead of the curve. These tools are not just trends—they're shaping the future of digital media production, and now's the time to dive in.
Archived recordings at this link
Register at this link for the next chapter in our 13-part series on GenAI for Digital Media Producers.
Each :30 minute talk will be on a specific topic, followed by audience questions/discussion.
Registrants will receive access to recording & deck w/links.
6/28/24 GenAI for Images
7/12/24 GenAI for Video
7/26/24 GenAI for Design
8/9/24 GenAI for 3D
8/23/24 GenAI for Audio
9/6/24 GenAI for Editing
9/20/24 GenAI for Post-Production & VFX
10/4/24 GenAI for Storytelling
Friday 10/18/24 GenAI for Marketing
Friday 11/1/24 GenAI for Animation
Friday 11/15/24 GenAI for Game Development
Friday 12/6/24 GenAI for Producing
Friday 12/20/24 GenAI for Emerging Media & the Future
GenAI2.5 for Producers: EDITING
Lights, Camera, Automation! AI Tools for Editors on the Rise
In the ever-evolving world of digital media production, especially in animation, visual effects, and emerging media, the role of editing is rapidly transforming thanks to AI-powered tools. These Generative AI (GenAI) tools are not just augmenting the capabilities of editors but also revolutionizing the entire editing workflow. For producers, these advancements mean more efficient, streamlined, and creative possibilities in post-production.
Text-Based Editing: Streamlined and Intuitive
One of the standout developments in GenAI for editing is text-based editing. Tools like Adobe Premiere's transcription feature have simplified the editing process by converting audio tracks directly into text. Editors and even non-editors can now manipulate the text document to rearrange, cut, or paste parts of the narrative, which automatically translates into cuts on the video timeline. This innovation is particularly advantageous for quick rough cuts, making the editing process accessible to those who might not have extensive editing experience, such as directors or subject matter experts. It’s not just about reducing the time spent on repetitive tasks; it’s about enabling faster iterations and more creative control over the content.
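Conceptually, text-based editing works because every word in the transcript carries source timecodes, so edits to the text can be mapped straight back to cuts on the timeline. The sketch below uses hypothetical transcript data rather than Premiere's actual project format, purely to show that mapping.

```python
# Minimal sketch of text-based editing: hypothetical word-level timestamps,
# not any real NLE's project format.
words = [
    {"text": "Welcome", "start": 0.0, "end": 0.4},
    {"text": "um",      "start": 0.4, "end": 0.7},
    {"text": "to",      "start": 0.7, "end": 0.9},
    {"text": "the",     "start": 0.9, "end": 1.0},
    {"text": "studio",  "start": 1.0, "end": 1.6},
]

# The "edit" is simply the words the producer kept, in order.
kept = ["Welcome", "to", "the", "studio"]

def words_to_cuts(words, kept):
    """Turn kept words back into (start, end) source ranges, merging neighbors."""
    cuts = []
    it = iter(words)
    for target in kept:
        for w in it:
            if w["text"] == target:
                if cuts and abs(cuts[-1][1] - w["start"]) < 1e-6:
                    cuts[-1] = (cuts[-1][0], w["end"])   # extend the previous cut
                else:
                    cuts.append((w["start"], w["end"]))
                break
    return cuts

print(words_to_cuts(words, kept))  # [(0.0, 0.4), (0.7, 1.6)]
```

In a real NLE the cuts would reference source timecode and clip IDs, but the mapping from edited text back to timeline ranges is essentially this.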
Automated Cutdowns and Social Media Content Generation
GenAI tools like Opus Pro and InVideo are game-changers for producing bite-sized content tailored for social media. These tools automatically create multiple versions of videos in different aspect ratios, add dynamic captions, and even insert B-roll based on the context of the content. For producers, this means that what was once a labor-intensive process can now be achieved with a few clicks, freeing up time for more creative decision-making. Imagine generating a 30-second clip that captures the essence of a 10-minute video, complete with sentiment analysis and animated captions, all while maintaining the story's integrity. This is the future of editing where AI does the heavy lifting, allowing editors to focus on crafting the narrative.
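The reformatting step in that kind of pipeline can be as simple as a scripted crop. The sketch below assumes a 1920x1080 master, ffmpeg available on the PATH, and hypothetical file names; it produces a center-cropped 9:16 vertical version while copying the original audio.

```python
import subprocess

# Center-crop a 1920x1080 master to an approximately 9:16 vertical frame
# (608x1080), keeping the original audio. File names are hypothetical.
subprocess.run([
    "ffmpeg", "-y", "-i", "master_16x9.mp4",
    "-vf", "crop=608:1080:656:0",   # width:height:x:y, centered horizontally
    "-c:a", "copy",
    "clip_9x16.mp4",
], check=True)
```

Smarter tools move that crop window to follow the subject rather than locking it to center.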
Expanding Creative Horizons with AI-driven Avatars and Smart Tools
AI tools are also pushing the boundaries of creativity with features like AI-driven avatars and lip-sync capabilities, making it possible to add digital presenters or characters that perfectly mimic the lip movements of real human dialogue. Tools such as Synthesia enable seamless integration of these avatars into content, from marketing videos to interactive media. These avatars can be customized, making content more engaging and versatile, particularly for brands looking to personalize their digital presence. Meanwhile, smart cropping tools, such as those offered by Adobe and Pixel Hunter, ensure that faces remain centered in the frame, adapting videos for platforms like TikTok and YouTube.
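A stripped-down prototype of that face-centered cropping can be built with an off-the-shelf detector. The sketch below uses OpenCV's bundled Haar cascade on a single frame and returns a 9:16 crop window centered on the largest detected face; a production tool would track and smooth this across frames, and the file names here are hypothetical.

```python
import cv2

def face_centered_crop(frame, aspect=9 / 16):
    """Return (x, y, w, h) for a vertical crop centered on the largest face."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    h, w = frame.shape[:2]
    crop_w = int(h * aspect)               # full height, narrower width
    if len(faces) == 0:
        cx = w // 2                        # no face found: fall back to a center crop
    else:
        fx, fy, fw, fh = max(faces, key=lambda f: f[2] * f[3])
        cx = fx + fw // 2
    x = min(max(cx - crop_w // 2, 0), w - crop_w)
    return x, 0, crop_w, h

# Hypothetical usage on one frame pulled from a clip.
cap = cv2.VideoCapture("interview.mp4")
ok, frame = cap.read()
if ok:
    x, y, cw, ch = face_centered_crop(frame)
    vertical = frame[y : y + ch, x : x + cw]
    cv2.imwrite("vertical_frame.jpg", vertical)
cap.release()
```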
Conclusion: Merging AI Innovation with Human Expertise
While GenAI tools are becoming indispensable in the editing suite, the role of the editor remains crucial. AI might handle the rough cuts, the technical aspects of sound matching, or even the auto-generation of short social media videos, but storytelling—the heart of any great production—still requires a human touch. Combining these innovative tools with the traditional skills of seasoned editors will pave the way for more dynamic, efficient, and compelling digital media production. For producers, embracing these AI tools is not just about staying ahead of the curve—it's about unlocking new creative possibilities.
Archived recordings at this link
Register at this link for the next chapter in our 13-part series on GenAI for Digital Media Producers.
Each :30 minute talk will be on a specific topic, followed by audience questions/discussion.
Registrants will receive access to recording & deck w/links.
6/28/24 GenAI for Images
7/12/24 GenAI for Video
7/26/24 GenAI for Design
8/9/24 GenAI for 3D
8/23/24 GenAI for Audio
9/6/24 GenAI for Editing
9/20/24 GenAI for Post-Production & VFX
10/4/24 GenAI for Storytelling
Friday 10/18/24 GenAI for Marketing
Friday 11/1/24 GenAI for Animation
Friday 11/15/24 GenAI for Game Development
Friday 12/6/24 GenAI for Producing
Friday 12/20/24 GenAI for Emerging Media & the Future
GenAI2.4 for Producers: AUDIO
The Sound of Tomorrow: How AI is Transforming Audio in Animation and VFX
In the ever-evolving landscape of digital media production, artificial intelligence (AI) has emerged as a game-changer, particularly in the realms of animation, visual effects, and emerging media. Among the most exciting advancements is the use of Generative AI (GenAI) for audio, a tool that is rapidly transforming how producers approach sound design, music creation, and voice synthesis.
At Mechanism Digital, we’ve been navigating the waves of technological innovation for nearly three decades, and AI is simply the latest tide to ride. GenAI for audio isn’t just another tool—it’s a revolution in how we think about sound in media. Whether it’s crafting a perfect score, creating realistic voiceovers, or generating sound effects from video, AI has streamlined processes that once took days or even weeks.
For example, tools like ElevenLabs and AudioCraft allow us to produce high-quality speech synthesis and sound effects with incredible customization. These tools can separate vocals from music, remix tracks, and even translate speech while maintaining the original emotional tone—impressive feats that enhance both efficiency and creativity in production.
One of the most remarkable aspects of GenAI for audio is its ability to generate music and soundscapes that are tailored to specific prompts. Whether you need a bluesy tune or a hard rock anthem, these tools respond to creative direction with surprising knowledge of even obscure genres. This not only opens up new creative possibilities but also democratizes the music production process, making it more accessible to those without formal training in music theory.
As we continue to explore and integrate these tools into our workflows, it's clear that AI is not just a trend but a vital component of modern media production. While stock music and traditional methods still have their place, the ability to generate custom audio content on-demand is becoming an invaluable asset. By leveraging our experience and embracing these innovative technologies, we can deliver more nuanced, high-quality content that resonates with audiences on a deeper level.
Archived recordings at this link
Register at this link for the next chapter in our 13-part series on GenAI for Digital Media Producers.
Each :30 minute talk will be on a specific topic, followed by audience questions/discussion.
Registrants will receive access to recording & deck w/links.
6/28/24 GenAI for Images
7/12/24 GenAI for Video
7/26/24 GenAI for Design
8/9/24 GenAI for 3D
8/23/24 GenAI for Audio
9/6/24 GenAI for Editing
9/20/24 GenAI for Post-Production & VFX
10/4/24 GenAI for Storytelling
Friday 10/18/24 GenAI for Marketing
Friday 11/1/24 GenAI for Animation
Friday 11/15/24 GenAI for Game Development
Friday 12/6/24 GenAI for Producing
Friday 12/20/24 GenAI for Emerging Media & the Future
GenAI2.3 for Producers: 3D
From Pixels to Polygons: AI Workflows in 3D Production
In the ever-evolving world of digital media, especially within animation and visual effects, the integration of AI is not just a trend but a transformative shift. At Mechanism Digital, we're embracing this shift, using AI to revolutionize our workflows and push the boundaries of what's possible in 3D production.
Generative AI tools, especially combined with 3D tools, are reshaping how we create and visualize content. Whether it's generating intricate 3D models from simple prompts or refining the textures and lighting of complex scenes, AI is becoming an indispensable tool for producers and artists alike. These tools are not just speeding up the production process but are also introducing new workflows that allow for more creative freedom and precision.
For instance, diffusion models, a cornerstone of current AI advancements, enable us to create detailed 3D models from both 2D images and text prompts, offering a new level of depth and realism to our projects. This technology has opened doors to possibilities that were once time-consuming and cost-prohibitive, allowing us to bring high-quality visuals to projects with tighter budgets.
Moreover, tools like Luma Labs and Mesh AI are leading the charge in AI-driven 3D model generation, each offering unique capabilities that cater to different aspects of production. Whether it's creating low-poly models for background assets or generating highly detailed hero models, these AI tools are proving to be game-changers in the industry.
As AI continues to evolve, so do the opportunities it presents. We're on the cusp of a new era in digital media production, where AI is not just an assistant but a creative partner, helping us to tell better stories with more efficiency and flair. The future of 3D in animation and VFX is here, and it's being driven by the power of AI.
Archived recordings at this link
Register at this link for the next chapter in our 13-part series on GenAI for Digital Media Producers.
Each :30 minute talk will be on a specific topic, followed by audience questions/discussion.
Registrants will receive access to recording & deck w/links.
6/28/24 GenAI for Images
7/12/24 GenAI for Video
7/26/24 GenAI for Design
8/9/24 GenAI for 3D
8/23/24 GenAI for Audio
9/6/24 GenAI for Editing
9/20/24 GenAI for Post-Production & VFX
10/4/24 GenAI for Storytelling
Friday 10/18/24 GenAI for Marketing
Friday 11/1/24 GenAI for Animation
Friday 11/15/24 GenAI for Game Development
Friday 12/6/24 GenAI for Producing
Friday 12/20/24 GenAI for Emerging Media & the Future
The Good and Bad of AI in Animation and VFX
Challenges and Opportunities: Adapting to AI in the Animation Industry
Since founding Mechanism Digital in 1996, I've witnessed many tech changes in the visual effects and animation industry. Some innovations, like 3D animation, have really changed how we tell stories, while others have helped with daily tasks without causing a major shake-up. However, artificial intelligence (AI) and machine learning are the biggest leaps I've seen, changing our industry in ways we couldn't have imagined.
AI's impact has been quick and deep, affecting not just visual effects and animation, but also writing, voiceover, and music production. Technology changes faster each year, and predicting what will happen in the next five to ten years is hard. But AI will definitely be both a challenge and an opportunity for the creative industries. It will drive innovation but also raise important questions about the future of human creativity and jobs.
As a business owner, I've always been eager to try out new tools. This curiosity has led us to beta test early 3D software, compositing plugins, augmented reality apps, virtual reality, interactive real-time game engines, and now AI. This article explores AI's role in visual effects, animation, emerging media, and post-production.
Design and Brainstorming –
One of the main ways our studio uses AI is in the design phase, where it acts like a creative partner. Design is crucial and often takes up 25% of a project's time and budget. In this phase, we ask clients many questions to understand their vision to help us focus a project's direction. AI helps by generating different ideas quickly, letting us explore various directions. It can also inspire unexpected ideas, blending styles creatively. For example, mixing a cowboy theme with 80s new wave elements. If an idea doesn't work, we prompt AI for a new one and quickly refine our approach. However, AI can sometimes suggest ideas that are too expensive for the client. It's important for us to guide the direction and not present options beyond the client's budget. Another drawback is AI's random nature, making it hard to produce consistent styles from shot to shot. We've learned to guide AI using traditional digital tools, like hand-drawn sketches or simple 3D layouts, to maintain control over composition and imagery.
Populating Backgrounds of 3D Scenes -
AI can easily create low-detail textured 3D models, although these are often best reserved for background elements due to limitations in handling very specific requests and technical issues like baked-in lighting and inflexible UV layouts. One of our favorite uses is having GenAI produce beautifully detailed digital matte paintings; additional AI tools can then add limited 3D depth, enabling subtle camera pan and zoom moves in compositing programs like Fusion, Nuke, and After Effects.
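The "2.5D" depth trick behind those subtle moves amounts to offsetting pixels by an amount proportional to their depth. Here is a rough NumPy sketch with synthetic stand-in data, not a production comp setup, showing per-pixel parallax for a small horizontal camera shift.

```python
import numpy as np

def parallax_shift(matte, depth, shift_px=12.0):
    """Offset each pixel horizontally by shift_px scaled by depth.

    matte: H x W x 3 image; depth: H x W in [0, 1] where 1 = near, 0 = far.
    Nearer pixels move more, which reads as a small camera pan.
    """
    h, w = depth.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_x = np.clip((xs - shift_px * depth).round().astype(int), 0, w - 1)
    return matte[ys, src_x]

# Toy usage with a synthetic matte painting and a left-to-right depth ramp.
matte = np.random.rand(270, 480, 3).astype(np.float32)
depth = np.tile(np.linspace(0.0, 1.0, 480, dtype=np.float32), (270, 1))
frame_offset = parallax_shift(matte, depth, shift_px=12.0)
```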
Rotoscoping and Animation -
Rotoscoping is another area where AI shows promise. AI can do an initial pass, creating a rough mask that artists can use for slap-comps and/or refine later. Eventually, AI will probably eliminate the need for shooting on green screen, reducing costs and time or simply adding flexibility to productions after they have been shot.
Surprisingly, we are now seeing tools to create or influence animation, beginning with simple tasks like animated logo builds. AI applies motion to elements based on shape and composition, which could be useful for non-animators or consumers creating a basic logo animation. For an animation studio like ours, these tools are not yet useful, but as AI evolves, I’m sure it will handle more complex animation.
Editing and Post-Production -
In editing scripted content, AI can build rough assembly edits from script supervisors' notes in minutes and tag shots automatically. For interview content, producers can highlight transcript sections, and Adobe Premiere will edit footage instantly, freeing editors to focus on creative tasks and story coherence. Additional tools will remove all the “ums” and pauses, then clean up background audio and even replace flubbed words.
Educational content and interview content can be automatically broken into sound bites, reformatted for different social media platforms, and uploaded automatically. Using these tools in a pipeline, AI services can monitor related articles/tags on social media channels, generate and tag topical content, and respond to trends within minutes by posting topical content. It can measure performance and adjust strategies for better engagement. Content creation studios, which are not usually involved in media buying and ad placement, can now use new tools to automate the posting and monitoring processes so created content won’t require a handoff to another agency.
Challenges and Opportunities -
AI offers a lot of potential but also brings challenges. Young animation artists must adapt to rapidly changing tools, and senior artists must guide juniors while learning new technologies themselves. Remote work further complicates training, highlighting the need for better teleconferencing tools.
AI has already been replacing some jobs in our industry, like background creation and early design development. This doesn’t mean a computer has taken over those jobs, but talented artists employing AI can now output better work faster, so the industry needs fewer artists. This shift is inevitable, and those affected must embrace new technologies to stay current. AI needs content experts to guide its output, ensuring quality and strategy alignment. In the short term, experienced producers with a trained eye will benefit most, amplifying their design abilities. This will be a hurdle for junior artists entering the industry who haven’t yet developed their “eye.” I believe this imbalance will shift as new generations embrace AI in ways the current generation couldn’t have predicted.
Another downside of AI is its ease of use, which means more people can create content, but this also leads to a flood of mediocre work. It's as if alchemy was discovered and suddenly everyone could make gold, devaluing the precious metal and making it gaudy. I trust our use of AI's "alchemy" will mature and help us create useful materials, making us more productive instead of just showing off.
Embracing AI is essential for staying competitive in media content production. AI speeds up tasks, opens new creative possibilities, and democratizes content production. However, it needs guidance from experienced professionals to maximize its potential. By finding the intersection between personal expertise and AI, individuals can leverage their knowledge to navigate this evolving landscape and harness AI's power for innovative content creation.
GenAI2.2 for Producers: DESIGN
Unleashing the Power of Generative AI in Design for Digital Media
AI is revolutionizing the digital media production landscape, particularly in the design phase of projects. As producers of animation, visual effects, and emerging media, embracing generative AI tools can significantly enhance our creative processes. These tools not only streamline design phases but also open up new possibilities for innovative visual content.
Generative AI tools, such as Stability AI and Adobe Photoshop, offer powerful capabilities for creating intricate designs and visual effects. For instance, combining multiple styles through text prompts can yield unique and inspiring visuals. This is particularly useful in brainstorming sessions where clients seek fresh and imaginative concepts. By inputting a simple prompt, AI can generate an image of high heel shoes inspired by a strawberry garden and fantasy waterfalls, showcasing how AI blends various elements to produce stunning visuals.
Another exciting aspect is the use of AI in storyboarding and pre-production. Tools like Luma Labs and Dream Studio enable the creation of dynamic style frames and animatics. These AI-driven visuals help clients and creative teams visualize motion and scene composition early in the production process. This iterative approach ensures that the final product aligns closely with the envisioned concept, making pre-production more efficient and collaborative.
Moreover, AI's ability to generate variations of concepts, as seen with tools like Krea and Mage, allows for extensive exploration of design ideas. Whether it's adjusting lighting, adding elements, or creating entirely new scenes, these tools provide flexibility and precision in refining visual content. This iterative and collaborative design process ensures that the final output is not only visually captivating but also aligns with the project's creative vision.
A new breed of studio has emerged for marketing and entertainment, where artists and AI experts work together using a combination of traditional (digital) tools and new, often browser-based pipelines.
In summary, the integration of generative AI in design for digital media production is a game-changer. It enhances creativity, speeds up workflows, and fosters collaboration between clients and creative teams. As AI technology continues to evolve, its role in design will only become more pivotal, offering endless possibilities for innovation in animation, visual effects, and emerging media.
Archived recordings at this link.
Register at this link for the next chapter in our 13-part series on GenAI for Digital Media Producers.
Each :30 minute talk will be on a specific topic, followed by audience questions/discussion.
Registrants will receive access to recording & deck w/links.
6/28/24 GenAI for Images
7/12/24 GenAI for Video
7/26/24 GenAI for Design
8/9/24 GenAI for 3D
8/23/24 GenAI for Audio
9/6/24 GenAI for Editing
9/20/24 GenAI for Post-Production & VFX
10/4/24 GenAI for Storytelling
Friday 10/18/24 GenAI for Marketing
Friday 11/1/24 GenAI for Animation
Friday 11/15/24 GenAI for Game Development
Friday 12/6/24 GenAI for Producing
Friday 12/20/24 GenAI for Emerging Media & the Future
GenAI2.1 for Producers: VIDEO
Video Alchemy: Turning Images into Motion with AI
Welcome back to our ongoing series on Generative AI, where today we dive into the thrilling world of AI-generated video. As producers of digital media, especially in animation, visual effects, and emerging media, it's crucial to stay ahead of the curve. Let’s explore some groundbreaking tools that are revolutionizing the industry.
Generative AI, or GenAI, is taking the world of digital media by storm. Just as image diffusion models have transformed static visuals, video diffusion models are now reshaping how we produce dynamic content. These models find patterns in vast datasets of video frames and their metadata, allowing them to generate video clips that can be both imaginative and highly realistic.
One standout tool in this space is Runway. Known for its versatile capabilities, Runway's new video generation tool is garnering attention for its ease of use and impressive results. By simply inputting a concise prompt, users can generate short video clips that adhere to specific themes or visual styles. Runway's intuitive interface and powerful backend make it a favorite among digital media producers.
Another exciting player is Luma Labs with their Dream Machine. This tool excels in image-to-video conversion, where a single frame can be transformed into a fully animated sequence. The Dream Machine analyzes the input image, identifying elements and motion patterns to create seamless animations. It’s perfect for projects where maintaining visual continuity is crucial.
For those seeking more control, Comfy UI offers a node-based system that allows for intricate customization. While it requires a bit more technical know-how, the ability to manipulate every frame ensures that the final product aligns perfectly with the creative vision. It’s an excellent choice for producers who want to push the boundaries of what’s possible with AI-generated video.
The capabilities of these tools are continuously evolving, with new features and improvements being rolled out regularly. From enhancing resolution with Topaz Labs to creating engaging character animations with tools like ChatGPT and Stable Diffusion, the landscape is rich with possibilities.
In conclusion, AI-generated video is not just a futuristic concept but a present-day reality that’s enhancing production workflows and creative outputs. By leveraging these cutting-edge tools, producers can achieve remarkable efficiency and innovation in their projects. Stay tuned as we continue to explore the fascinating world of GenAI in our series.
All archived recordings at this link.
Register at this link for the next chapter in our 13-part series on GenAI for Digital Media Producers.
Each :30 minute talk will be on a specific topic, followed by audience questions/discussion.
Registrants will receive access to recording & deck w/links.
6/28/24 GenAI for Images
7/12/24 GenAI for Video
7/26/24 GenAI for Design
8/9/24 GenAI for 3D
8/23/24 GenAI for Audio
9/6/24 GenAI for Editing
9/20/24 GenAI for Post-Production & VFX
10/4/24 GenAI for Storytelling
Friday 10/18/24 GenAI for Marketing
Friday 11/1/24 GenAI for Animation
Friday 11/15/24 GenAI for Game Development
Friday 12/6/24 GenAI for Producing
Friday 12/20/24 GenAI for Emerging Media & the Future
GenAI2.0 for Producers: IMAGES
Art Meets Algorithm: The Future of Digital Media Production
In the dynamic world of digital media production, staying ahead means harnessing the latest in AI technology. Today, we're diving into the revolutionary world of AI tools that are transforming how producers create and manage images. These tools, driven by complex diffusion models, are making it easier than ever to generate stunning visuals that captivate and engage audiences.
Generative AI models like Midjourney, OpenAI's DALL-E, and Stable Diffusion are at the forefront of this transformation. These models work by recognizing patterns across enormous datasets of images and metadata, allowing them to generate new, unique visuals based on simple text prompts. This process mimics human creativity, enabling producers to experiment with different styles and concepts quickly and efficiently.
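Under the hood, a diffusion model starts from pure noise and repeatedly removes the noise a trained network predicts, gradually revealing an image. The toy loop below sketches that reverse process with a placeholder predictor standing in for the trained network, purely to show the shape of the algorithm.

```python
import numpy as np

# Toy stand-in for a trained noise-prediction network; a real model (U-Net or
# transformer) would also take the text-prompt embedding as conditioning.
def predict_noise(x, t):
    return x * 0.1  # placeholder, NOT a trained model

T = 50
betas = np.linspace(1e-4, 0.02, T)      # noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

x = np.random.randn(64, 64, 3)          # start from pure Gaussian noise
for t in reversed(range(T)):
    eps = predict_noise(x, t)
    # DDPM-style reverse step: remove the predicted noise, then re-add a little.
    x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
    if t > 0:
        x = x + np.sqrt(betas[t]) * np.random.randn(*x.shape)
# After the loop, x would be a generated image if predict_noise were a real model.
```

Text prompts enter as conditioning on the noise predictor, which is how a sentence like "a neon-lit city street in the rain" steers the result.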
AI-driven image generation is not just about creating pretty pictures; it's about pushing the boundaries of what's possible in digital media. These tools can produce high-quality images in various styles, from hyper-realistic to stylized art forms, opening up new avenues for creative expression. Whether it's for a social media campaign, a storyboard for a film, or educational materials, these AI tools are revolutionizing the creative process.
All archived recordings at this link.
Register at this link for the next chapter in our 13-part series on GenAI for Digital Media Producers.
Each :30 minute talk will be on a specific topic, followed by audience questions/discussion.
Registrants will receive access to recording & deck w/links.
6/28/24 GenAI for Images
7/12/24 GenAI for Video
7/26/24 GenAI for Design
8/9/24 GenAI for 3D
8/23/24 GenAI for Audio
9/6/24 GenAI for Editing
9/20/24 GenAI for Post-Production & VFX
10/4/24 GenAI for Storytelling
Friday 10/18/24 GenAI for Marketing
Friday 11/1/24 GenAI for Animation
Friday 11/15/24 GenAI for Game Development
Friday 12/6/24 GenAI for Producing
Friday 12/20/24 GenAI for Emerging Media & the Future
Sora & Toys Я Us
Toying with AI: How Sora and Toys "Я" Us Are Changing the Game
Sora and Toys "R" Us have recently partnered, along with Native Foreign, to create a fascinating blend of AI and traditional filmmaking techniques in their new video. Debuting quietly on the Toys "R" Us page after being showcased at the 2024 Cannes Lions Festival, this collaboration exemplifies a savvy advertising strategy that leverages both brands' strengths to generate buzz and credibility.
The video, designed to be easily shareable, cleverly plays on the uncanny valley effect that often plagues AI-generated content. While some viewers noted the slightly eerie quality of the human characters, most agreed that the overall production quality was impressive, even rivaling that of Hollywood 3D humans. The voice-over, done by a real person rather than an AI, adds a layer of authenticity and warmth to the video, mitigating any robotic undertones.
One of the most interesting aspects of this video is the subtle disclaimer embedded in the content. While Sora is capable of generating realistic scenes and characters from text prompts, the video also incorporates visual effects (VFX) that enhance the final product. This combination of AI and human input underscores the potential for collaborative creativity in the digital age, where AI can handle initial concepts and humans refine the final output.
The production of the Sora and Toys "R" Us spot likely resembled the making of a documentary, where the narrative is often crafted in the editing room. The process begins with generating hundreds of shots, each capturing different scenes and moments that can inspire new directions and ideas. These shots are created out of order, much like the raw footage of a verité film or a reality show, providing a wide array of material for the editor to work with.
In the editing phase, the challenge lies in sifting through this extensive footage to construct a coherent and engaging story. The editor must pull together the most compelling and visually interesting shots, creating a sequence that feels exciting and dynamic, similar to a movie trailer that teases various concepts without revealing a full narrative. Non-sequitur shots, which might seem out of place in a traditional storyline, are acceptable here and can add to the overall intrigue and appeal.
To enhance continuity and storytelling, visual effects are employed. Techniques like adding particles and even 3D animation can help weave disparate shots into a seamless narrative. Once the visual framework is established, a script is written to provide structure and coherence. This script undergoes several rounds of editing and rewriting, aligning closely with the visuals to ensure a smooth flow. A voice-over is then added, bridging the gaps between scenes and eliminating the need for lip sync, while also lending a human touch that makes the entire piece more relatable and cohesive.
The strategic decision to include a downloadable option for the video means that it can be widely distributed by consumers, amplifying its reach without additional promotional efforts. This move is particularly clever given Toys "R" Us's shift from physical stores to a more digital presence. By allowing easy sharing, they tap into organic promotion, turning viewers into brand advocates.
Moreover, the partnership with Sora lends Toys "R" Us a modern, tech-savvy edge while granting Sora credibility through association with a beloved, family-friendly brand. This blend of old and new, human and AI, reflects a forward-thinking approach to advertising that could set a new standard in the industry over the next couple of years—a method that Mechanism Digital has fully embraced in its own production processes.
View the spot here: https://www.toysrus.com/pages/studios
GenAI for Content Producers
AI-Powered Creativity: A New Era for Content Producers
As content producers in the digital media landscape, the integration of Generative AI tools has opened up a realm of possibilities, transforming the way we create, innovate, and deliver content. These cutting-edge AI tools are not here to replace us but to enhance our capabilities, enabling us to produce stronger, more impactful work efficiently.
Generative AI for images has evolved remarkably, allowing us to generate high-quality visuals from simple text prompts. Tools like Adobe Firefly, Midjourney, and Stable Diffusion leverage advanced models to create stunning imagery while ensuring copyright safety. These tools recognize patterns from vast datasets, enabling us to guide AI to produce on-brand and on-strategy visuals. This is a game-changer for visual effects and animation companies, streamlining the brainstorming process and fostering creative evolution.
In video production, generative AI is making significant strides. While still in its early stages, tools like Runway and Astria are enabling us to create video content by blending images and prompts, thus generating frames that align with our vision. Although some issues, like frame flickering, persist, these tools are continuously improving, promising a future where video generation is seamless and efficient. This iterative process of creating video frames based on textual and visual inputs is revolutionizing the way we approach video content creation.
Moreover, AI tools are invaluable for post-production work. From enhancing image resolution using Topaz AI to relighting scenes and generating realistic 3D textures, these tools turbocharge our traditional workflows. They allow us to iterate quickly, refine outputs, and ensure that the final product aligns with our creative vision. The ability to generate 3D models and textures on the fly is particularly beneficial for animation studios, reducing the time and effort required to create detailed assets.
In summary, generative AI tools are empowering content producers to push the boundaries of creativity and efficiency. By embracing these technologies, we can elevate our work, deliver exceptional content, and stay ahead in the ever-evolving digital media landscape.
All archived recordings at this link.
Register at this link for the next chapter in our 10-part series on GenAI for Digital Media Producers.
Each :30 minute talk will be on a specific topic, followed by audience questions/discussion.
Registrants will receive access to recording & deck w/links.
6/28/24 GenAI for Images
7/12/24 GenAI for Video
7/26/24 GenAI for Design
8/9/24 GenAI for 3D
8/23/24 GenAI for Audio
9/6/24 GenAI for Editing
9/20/24 GenAI for Post-Production & VFX
10/4/24 GenAI for Digital Production
10/18/24 GenAI for Marketing
11/1/24 GenAI for Emerging Media and future trends for creative industries
Drawing Insights
Unveiling Medical Mysteries with Whiteboard Wisdom
Whiteboard animations simplify the intricate world of medical knowledge, providing a cost-effective means for doctors to stay abreast of the latest in disease mechanisms and treatments. Crafted with care, these animations transform complex medical theories into friendly, accessible illustrations. Each animation begins with a detailed drawing, segmented into parts that are strategically revealed in sync with a Key Opinion Leader's (KOL) narration. This technique ensures that doctors can concentrate on one concept at a time, enhancing their understanding and retention of the material. The use of a hand-drawn approach not only makes the information more relatable but also allows for a gradual exploration of ideas, preventing the overwhelm that could come from presenting all details at once.
The economic advantage of whiteboard animations lies in their straightforward production process, which avoids the high costs associated with more labor-intensive 3D animations. By focusing on static images that are sequentially unveiled, these animations manage to convey critical medical advancements and research in a format that's both engaging and financially viable for sponsors, typically pharmaceutical companies. This approach not only facilitates the continuous education of medical professionals but does so in a manner that respects budgetary constraints. Consequently, whiteboard animations have emerged as a valuable tool in the realm of Continuing Medical Education (CME), enabling doctors to digest complex information effectively while fostering an environment of learning and growth within the medical community.
Examples of whiteboard animations.
Originally published: 02/12/2024
The [m]etaverse is here to stay
Facebook’s Metaverse is just one part of the larger metaverse.
Facebook did not invent the term “Metaverse.” It’s a term that was coined in a cyberpunk book called Snow Crash by Neal Stephenson in 1992. Neal was a visionary in predicting what technology would bring to fruition.
Facebook decided to take on the moniker “Meta” because it realized that the small-m metaverse is going to be huge for its own reasons. Meta is also a catchier name, and now whenever anyone talks about the small-m metaverse, they're effectively repeating Facebook's brand name. Facebook clearly wanted to hitch a ride on the Zeitgeist, but a messy roll out has made its Metaverse sort of a joke.
I think the small-m metaverse will be an interesting new way to communicate and have fun. More importantly, it will satisfy our constant craving to be more and more immersed in the media we consume — especially our nightly TV or movie watching. All of the enormous consumer TVs that fill our peripheral vision are evidence of how much our viewing experiences are turning into immersive vacations from the real world.
The ultimate in immersion is yet to come, but we’ve seen a glimpse of it with Apple’s new, $3,500 Vision Pro headset. I applaud Apple for their mastery of engineering and materials, but I might hold off buying until it can do everything with lightweight frames that are indistinguishable from eyeglass frames and the price comes way down.
Much of the enthusiasm for the metaverse is coming from the workplace. The idea of avatars has already been taken up by Microsoft Teams, and Zoom has introduced avatars as well. I think a lot of people slogging through meetings in the one photogenic corner of their home may actually prefer to exist as an avatar against a fictitious background in the future.
Facebook was probably a little early in marketing their Metaverse — which, again, is part of the larger, small-m metaverse. The vendors don’t exist yet and, famously, the lower half of the avatars didn’t exist for a long time, either. Though the technology may not be quite ready, Facebook is sitting pretty on a great name.
Originally published: 08/31/2023
Can a Robot Write a Symphony?
AI Prompts have become a marketable art form
AI has democratized content creation with the ability to generate images from the “model” of images it was trained on. In the future — once the training models are big enough — anyone will be able to request a “new” film derived from existing content by asking AI something like:
“Recreate Casablanca, but in color, starring Leonardo DiCaprio and Natalie Wood, set in winter in a mountainous country, with a soundtrack by late-90s Aphex Twin."
Writing “prompts” like this will continue to become an art form in itself. Writing prompts always makes me think of Jodie Foster’s Dr. Arroway in Contact, when she is at a loss to describe the beauty she sees: “They should have sent a poet instead of a scientist.”
Popular media, as upvoted by humans, will rise to the top through crowdsourcing, i.e.: “likes.” These likes will continue to be heavily influenced by media companies — the way studios (or any business) spend on marketing, and similar to how record companies push their “commercial music” on radio (now Spotify, etc.) for financial ROI.
Music seems especially susceptible to AI fever, thanks to the short length of the media itself, and passionate, passionate fans. Despite it being available to them, I don’t believe the majority of people are interested in creating new content or even mash-ups with AI, but the masses will probably be happy to consume higher-quality AI-generated content.
Music (arguably the first art form created by humans) has always been one of the first mediums to leverage new technology:
- MIDI technical standard
- Digital files like .wav and .mp3
- Editing on a home laptop
- High-quality recording on a laptop
- Streaming
Although music hasn’t been in the news as much as imagery or chat, it will likely be at the vanguard of the AI revolution. Visual storytelling, being much more creatively and technically complex than music, will inexorably follow in music’s path to digital adulthood, as it has historically.
Jumping further into the future, it’s easy to imagine music becoming flooded with an infinite number of generated songs, or each of us can have one long eternal soundtrack generated on the fly, all based on the mood we’re in.
It truly seems like the only limit will be our imaginations.
I’ve written before about how the best results occur when humans and machines work together. Computers can't create art, but then again, most humans can't either. We should expect to feel threatened, even jealous, until we let go of our egos and learn to work with the machines. The 2004 film I, Robot provides a poignant example of this tension, as Will Smith's character inquires of the robot, "Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?" To which the robot astutely replies, "Can you?"
Originally published: 05/05/2023
To Err Is to Be Experienced
Have you ever flashed film?
Very early in my career, I worked as a PA, or runner, on a couple of films. Our job was to run things back and forth between production offices and the set: getting coffee, picking up and dropping off talent or props, and handling the myriad detailed logistics that have to happen for a shoot.
Once in my career, I was interviewing with a producer for a new project, and the producer asked me, "Have you ever flashed film before?"
Flashing film refers to accidentally exposing undeveloped film to light while it is still sensitive, hence the name. Like all roads to hell, flashing film usually occurs with good intentions. And a tight schedule.
A lot of effort is put into developing the film quickly so the director and producers can review the previous day’s footage. These “rushes,” or “dailies,” are meant to spot anything that might warrant a re-shoot. Even today, re-shoots need to be addressed as soon as possible to lock in the location, actors, wardrobe, etc., a second time.
Under pressure, a well-meaning crew member in the photo lab could have opened the wrong can of film. It would have been costly enough to flash blank film, but utterly devastating — to so many people — to have an entire day’s work literally be gone in a flash.
I immediately answered the producer’s question. "No, of course, I haven’t flashed film. Flashing film would be the worst thing in the world.”
And the producer said, "I’m sorry, but I can’t hire you — once you've flashed film, you never, ever flash film again. And this is not going to be the production that teaches you that lesson.”
I remember agreeing with him. He must have had some bad experiences in the past, or maybe he had heard some stories.
Film is much less common these days, but there are similar disasters that could devastate productions. Mistakes such as deleting digital files directly from the camera, or corrupting a hard drive during post-production, can erase hundreds of hours of work — potentially before any back-up has occurred.
Sometimes admitting past mistakes — and your commitment to never repeat them — can build more trust than trying to convince an employer that you have a spotless history. I’m grateful to this producer because he taught me an important lesson: Never having made a mistake is nothing to brag about.
Originally published: 02/22/2023
Price vs. Value
An early lesson in my trajectory to become an entrepreneur.
In the summer of 1979, I was twelve years old. School was out for a couple more months before seventh grade started, and there was a gas shortage affecting the whole country. It was typical for long, long gas lines to stretch back a mile or more.
I was living between SoHo and Little Italy here in New York City. One of my best friends, Peter, saw these gas-starved cars as a market. We decided to fill a cooler full of dry ice and cans of soda and sell them to the motorists sitting in their cars with the engine off. (Even if you had air conditioning, you didn't want to use up too much gas.)
The idea was to push a dolly up and down Houston Street (pronounced HOW-ston), which is a big street here in downtown New York. We learned that there was a soda distributor about half a mile away and bought sodas by the case. I remember they were about 15 cents a piece from the distributor.
I suggested that we sell them at 25 cents each because that was the best price that I had seen in vending machines. I figured everybody would be happy to purchase a cold soda at this great price and we'd make money by selling a lot of units. We sold out several times, necessitating half-mile walks to the distributor and back in oppressive heat.
When we did the financials, we realized we didn't really make a whole lot of money. Selling hundreds of cans at a 10-cent profit really didn't amount to very much income compared to all the work it required.
It took many more years, probably until after I had started in the computer animation and visual effects business, to realize that we should have been charging what the sodas were actually worth. There was a lot of value that we brought to people, and it wasn't just the refreshing taste of a dry-ice-cooled soda; customers were smiling.
In hindsight, if we had charged 50 cents for each of those sodas — which would've been a higher price for a soda in 1979 — we would've made a lot of money at the time. Perhaps fewer people would have bought, but it would have entailed a lot less work.
We would have been rewarded more for our idea and market innovation.
I think reflecting the value of our service in the price we were charging would have instilled in me a lot more appreciation for business, and it would have been a nice little haul for two enterprising 12-year-old boys.
That was my first business. Here we are, 25 years later, and I'm still working in my own business. There's still a side of me that always wants to make our work as inexpensive and cost-effective as possible — but I also need to keep in mind that it's really about value and the win-win. There’s a saying I like: “Price is what you pay, and value is what you get.”
Originally published: 11/16/2022
Forecast: A Summer of Growth
New positions promise to propel Mechanism Digital to the stratosphere.
Mechanism Digital is blossoming into summer with a couple of new positions adding expertise and efficiency. This is a great time to add new staff because the initial lag in film production created by Covid has recently resulted in a surge of high-quality post-production projects. Two features we are proud to be partnering on are Eileen, based on the novel by Ottessa Moshfegh, and A Thousand and One, directed by A. V. Rockwell.
Shepherding these film VFX projects is our new Head of Production, Julieta Gleiser, who has experience on blockbuster productions as well as in the animation and VR/AR space. This is a perfect skill-set fit for our shop, and we are super excited about Julieta's breadth of experience and leadership.
After being on set supervising a half dozen projects for Mechanism through fall and winter, and knowing the flood of films was coming, we estimated we had six months to find our new HOP. We went through close to a hundred resumes and conducted over 20 interviews with senior producers until we found Juli: exactly what we were looking for, a seasoned professional already plugged into the unique combination of industries our studio specializes in.
For the past 25 years we've worked hard to fill new positions by promoting our own people from within, although recruiting and hiring is a necessary investment as we grow the team. Before ads are placed, we ask whether a new role is a natural progression for someone on our current staff or a completely new hat with a unique skill set. A good example: Kathereena Singh, our seasoned bookkeeper, was promoted to Head of Finance to focus her attention at a higher level, guiding the entire team forward with budgets and finance at the executive company level. Kat has now been with us for over 10 amazing years and recently announced that she will soon be leaving us. It was a bittersweet announcement, but we are happy for her and wish her the best in her new adventures.
I'm really proud to see our VFX team grow and the increased throughput of creative work we have handled lately. The big takeaway for me is that the most important hat I wear is to be the catalyst between what clients need and what our amazing team is capable of.
Originally published: 07/14/2022
Saving Money at Any Cost
Proper post-production pipeline practices produce fewer perilous pitfalls.
New York has a soft spot for independent films, and Mechanism Digital is proud to have worked on so many of them. First, some disambiguation: “Independent” does not always mean “low-budget.” Some independent films cost millions of dollars to produce, while others are financed on a shoestring. Mechanism Digital enjoys working on all types.
One particularly memorable experience was a passion project that had been financed by the life savings of the producer/director. In an effort to be budget conscious, the filmmaker decided to have the edit team perform the VFX turnover to our studio and also handle the conform, avoiding the cost of a dedicated finishing house.
- The turnover is a critical part of filmmaking in which full-quality footage is “pulled” and delivered to the effects studio for VFX integration.
- Once the effects are approved, the high-resolution versions are reconciled with the low-quality, “offline” edit proxies. Matching these two versions together is called the conform.
Halfway through working on 100 VFX shots, the team realized the completed VFX footage wasn't matching in the conform. It turned out that the original pulls had been exported at a different resolution and were color corrected to linear (LIN) instead of the logarithmic (LOG) color space we typically receive.
Half the visual effects for the film had to be redone. As much as this filmmaker tried to save money and time, the shortcut ended up costing even more of both.
No matter how much of a rush a post-production schedule is in, whether it's a low-budget indie or a big-budget feature, we insist on running at least one shot through the entire pipeline. We might add a very simple visual effect to the footage, or nothing at all. Most importantly, we process this "version 0" test footage through every step, from file transfer to rendering and even naming conventions, and let the conform team re-import the footage and insert it back into the cut to confirm that everything works, so we can "bless the pipeline."
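For producers who like to automate the paranoia, the same "bless the pipeline" idea can be roughed out in a short script. The sketch below is only an illustration, assuming pulls arrive as QuickTime files and that ffprobe is installed; the expected resolution and color-transfer values are placeholders, not a universal spec.

```python
"""Rough sanity check for a VFX turnover: confirm each pull matches the
expected resolution and transfer characteristics before work begins."""
import json
import subprocess
import sys
from pathlib import Path

EXPECTED = {"width": 4096, "height": 2160, "color_transfer": "log100"}  # hypothetical delivery spec

def probe(path: Path) -> dict:
    """Read basic video-stream metadata with ffprobe (must be installed and on PATH)."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=width,height,color_transfer",
         "-of", "json", str(path)],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)["streams"][0]

def check_pulls(folder: str) -> None:
    """Flag any pull whose metadata doesn't match the expected delivery spec."""
    problems = 0
    for clip in sorted(Path(folder).glob("*.mov")):
        meta = probe(clip)
        mismatches = {k: meta.get(k) for k, v in EXPECTED.items() if meta.get(k) != v}
        if mismatches:
            problems += 1
            print(f"MISMATCH {clip.name}: {mismatches}")
    print(f"{problems} problem clip(s) found.")

if __name__ == "__main__":
    check_pulls(sys.argv[1] if len(sys.argv) > 1 else ".")
```

A five-minute check like this is no substitute for running a real shot through the pipeline, but it catches the resolution and color-space surprises before anyone starts compositing.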
In the end, we helped salvage as much work as we could on the unfortunate VFX shots and completed the additional work at a discounted price. Since that hard lesson a decade ago, Mechanism Digital has collaborated with the same filmmakers on several bigger-budget films. It's a perfect example of how, in the VFX business, relationships are everything.
Originally published: 04/01/2022
Opening Credits
Opening credits are an opportunity for directors to set the mood for the whole film.
Opening credits are a big deal in Hollywood.
As creatives, we see credits as a means to set a mood for the story with the choice of fonts, as well as the placement and composition of some transitions.
We recently worked on an opening credit sequence for a film called Moonshot. It's a romantic comedy about a woman who follows her boyfriend after he moves to Mars. The director was very knowledgeable about fonts, and he had a really clear idea of what he was looking for in order to complement the establishing footage of the film.
We were asked to explore several directions for fonts, which our design team presented as mood boards. In the end, we collectively agreed to go with a curved sans serif style for the titles, and a classy sans serif font for the credits. (One of our cardinal rules is to never use serif and sans serif fonts together.) The title font ended up being a little bit more stylistic, while the credits were in a very legible font to display all the key names and production companies.
We plan out the composition to avoid text appearing on top of important details in the footage (like people's faces) and output static style frames. These frames show each credit/name over a frame of the opening sequence, which has been timed out to be long enough to read while still flowing as an integral design element within the overall sequence.
Once everything is decided, we put all this together into a PDF and send it to the client, followed by a live discussion led by our lead designer — in this case, Jason Groden. Once the film’s director gives us some feedback on the composition and size of these fonts, then we enter the motion phase.
The director of Moonshot wanted to have an interesting reveal for each credit, so Jason developed a custom distortion effect, which looks like a glitchy digital transmission with chromatic aberration — color around the edges like what happens in-camera. Of course, there is no real camera lens, so that effect was generated digitally. Whether or not it's factually accurate doesn't matter; its reason for being is to create the mood.
We worked on another film recently called Judas and the Black Messiah. It's about the Black Panthers and was directed by Shaka King. It has a very different look and mood. The opening sequence consists of newsreel footage to illustrate how the Black Panthers were being portrayed at the time by the media. In this case, Shaka had an initial suggestion for a font when we first met, but asked us to go through a proper font exploration inspired by how news broadcasts looked in 1969. That said, he ultimately decided to go with his original font inspiration. (This is actually a common occurrence: the initial gut feeling turns out to be right on the money.)
A major portion of credits work is navigating politics and rules. In addition to the creative side of any production, we need to make sure we follow the rules and avoid causing legal ripples: industry politics, union rules, and contracts and waivers from the guilds. You can learn all about them in my next post.
Originally published: 02/25/2022
♪ Movie in the Sky with Clients ♫
Cloud-Based Visual Effects and Animation Production
Amazon Web Services (AWS) was in the news recently with an outage that affected Disney+ and Netflix. Mechanism Digital uses AWS, but our studio was not affected by the outage. In fact, our positive experience renting AWS's super-fast computers is what I was writing about when the outage happened.
The first thing to know is that the biggest benefit, from an accounting perspective, comes when the AWS machines are not being used: we simply stop paying for them. Renting processing power and speed is not unlike renting a car; it would be an expensive proposition to rent all the time, but it makes sense for sporadic or periodic use. That flexibility saves money, which we pass on to the client in the form of lower overhead and, in turn, lower bids to do the work.
The Amazon computers we connect to are probably hundreds of miles away, somewhere inland and dry. Like Microsoft’s cloud solution, called Azure, AWS can bill on an hourly, by-the-minute, and sometimes by-the-second basis depending on whether we are using Windows or Linux-based machines.
- Amazon's computers, servers, and networks are so much faster than what we had on premises that they sped up our whole workflow.
- We no longer need to keep repairing or buying new machines, because Amazon keeps its hardware upgraded.
- When we have a special need for particularly fast machines, with the click of a switch we can change from a 16-processor workstation up to a 64-processor workstation with 24 terabytes of RAM for rendering (see the sketch after this list).
- We can get more iterations done in a day, and spend more time refining and improving the art with reduced waiting between creative clicks and tweaks.
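As a rough idea of what that "click of a switch" looks like under the hood, here is a minimal Python sketch using AWS's boto3 library to resize a render node between jobs. It assumes credentials are already configured, and the region, instance ID, and instance type are placeholder assumptions rather than our actual setup.

```python
"""Minimal sketch of resizing a cloud render node between jobs with boto3."""
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

def resize_render_node(instance_id: str, new_type: str) -> None:
    """Stop an EC2 instance, switch its instance type, and start it again."""
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

    # The instance type can only be changed while the instance is stopped.
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        InstanceType={"Value": new_type},
    )
    ec2.start_instances(InstanceIds=[instance_id])

# Example: bump a node up to a larger instance type before an overnight render.
# resize_render_node("i-0123456789abcdef0", "c5.24xlarge")
```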
Having computing power in the cloud has had both obvious and some not-so-obvious benefits during the pandemic.
With zero effort, we can spin up a hundred instances for artists all over the world, which was not possible when we only had 20 workstations at our studio in NYC.
We also found savings in not staffing a full-time IT person, since that is now mostly handled by Amazon. Expanding and reducing storage is much more flexible in the cloud, which is key considering each frame is 50 megabytes and there are 24 frames in a second: roughly 1.2 gigabytes for every second of footage, or over 70 gigabytes per minute.
Our team does a lot of collaborative work with our clients, like going back and forth during the processes of building a model, texturing it, rendering it, and compositing it into a scene. Working remotely hasn't really changed this process, and for communications we still use the dedicated tools our clients know and love, like ShotGrid for presenting WIPs as well as organizing and relaying clear notes.
Now that our operations are streamlined and agile enough to continue production in the middle of a pandemic, we don't see ourselves going back into the studio. In fact, many of the artists and producers prefer working remotely and have been able to get more work done without the distraction of an open workspace. We're glad to be able to facilitate safer, and perhaps more productive, work solutions for our artists and our clients.
Originally published: 12/29/2021
Footnotes and Fair Balance
Exploring the intricate world of medical animation, where strict regulations, creative branding, and evolving technology converge.
In my last piece, I wrote about how the complexity of the body can be explained with animations — but that wasn’t the whole story. Within the medical animation space there are rules and regulations that govern much of the content, including those notoriously awkward lists of side effects in drug commercials.
Medical education agencies produce two major categories of marketing content: “branded” and “unbranded.”
- Branded content is advertising for a specific pharmaceutical after it has been approved by the FDA. By law, branded content must be accompanied by Important Safety Information (ISI) and often has to meet “Fair Balance” requirements, which mandate giving equal weight on screen to risk information, including possible adverse effects of the drug.
- Unbranded or “Promo” content is education-focused around disease awareness and new research but cannot mention a specific drug’s name, although these programs can be “brought to you by” the pharmaceutical company whose drug conveniently addresses the medical condition. These videos and animations must be accompanied by footnotes citing medical journal entries that support the medical points in the video.
Another related category of content is Continuing Medical Education, or CME, which tends to consist of live discussion panels of Key Opinion Leaders (KOLs). This content may be supported by static PowerPoint slides and doesn’t often use animated content, but we are starting to see that change with higher production values including 3D animation and virtual sets.
As pharmaceutical companies race to get their products to market, it’s not uncommon that both branded and unbranded Mechanism of Action (MOA) and Mechanism of Disease (MOD) animated videos are being produced while the medication is still in the clinical trials pipeline so they can be shown immediately after approval. Because science messaging is subject to change at any time with new research during clinical trials, medical animation production needs to be ready to pivot at any time — fortunately, well planned 3D/computer generated imagery (CGI) has the benefit of flexibility as a story changes. That said, the benefits of a flexible or extended schedule need to be weighed against the realities of cost.
It may sound counterintuitive, but approvals are an ongoing process. In general, there are three phases that need legal approval: the script, the animatic with scratch voiceover, and the final delivery. Because the law requires so much attention to detail, agencies would be wise to work with an animation studio that understands the subtleties and idiosyncrasies of meeting medical/legal approval requirements. This greatly reduces the agency's need to hold the studio's hand during initial costing and timing estimates, and, of course, through project delivery.
Originally published: 11/11/2021
VR: Venue Versatility
Digital booths find new life online with closure of medical congresses.
For the duration of the pandemic, healthcare congresses — normally hosting 50,000 attendees each — were shuttered. Doctors could not attend educational panels in person, resulting in the familiar trajectory of having all meetings over Zoom.
Historically, these conferences allowed doctors and other medical professionals to learn about new research and disease awareness efforts. Interactive experiences played a large role in the education process.
They say necessity is the mother of invention, so our team looked at public health restrictions as an opportunity to rethink in-person activities. The goal was to go live with a website where we could continue to develop compelling and educational experiences.
ExploreNoh is a good example of taking a physical booth and re-envisioning it as a virtual booth.
Hosted by a key opinion leader (KOL), the booth lets visitors explore a suite of presentations about a potentially serious condition. The tools for learning include:
- scientific animations;
- clinical surveys/quizzes;
- 360° immersive video; and
- downloadable brochures that summarize key information on the medical condition.
Participants on the web are presented with a virtual booth, which is inspired by the original four-sided booth design. The central attraction is a virtual reality (VR) interactive activity which simulates a fictional patient's life in the "A Day in the Life" format. If participants don't have an Oculus Quest headset available, they can experience the 360° video by click-dragging a mouse around to see the patient's environment and clicking on items that are significant to the patient's medical condition to get more information about them. My favorite moment is when the patient looks into a mirror and the "player" realizes they are seeing themselves as a 65-year-old man. The 360° activity can also be experienced through a mobile device by looking around in "Magic Window" mode, which makes the device's screen act as a portal into another world.
Fortunately, healthcare conferences are being planned as the world starts opening up again. This is great news for our in-person interactive projects — but with the positive feedback about our web-based content, we expect to continue to build out parallel experiences. That way doctors and other professionals who are not able to travel for whatever reason can still enjoy, learn, and share the content.
Originally published: 09/16/2021
Culturally Competent
In an industry of acronyms, medical animations must follow the letter of the law.
One of the challenges of being a medical animation vendor is the sheer number of rules and regulations. When medical education agencies produce Continuing Medical Education (CME) healthcare-centric videos or virtual reality (VR) experiences, they don't have time to explain to animation studios the nuances and idiosyncrasies of the med-legal review (MLR) process and industry jargon. For instance, the Sunshine Act, aka section 6002 of the Affordable Care Act (ACA), delineates a very clear line between education and marketing, and it is vital that we don't cross that line.
Producers who understand the MLR process require fewer rounds of revisions, which means less time, labor, and cost for the agency on projects.
To comply with the Sunshine Act, pharmaceutical companies employ Medical Science Liaisons, or MSLs, to raise awareness of new research and developments, but MSLs are prohibited from mentioning the drug's name. This education occurs primarily through sponsoring initiatives at medical congresses and trade shows. Healthcare professionals make the rounds at these events, visiting the disease awareness booths of major companies like Pfizer, Merck, and AstraZeneca, where MSLs staff the individual booths and are very circumspect about maintaining the line between education and marketing.
The education component consists of breakout panels, where expert Key Opinion Leaders (KOLs) discuss any new findings about the mechanism of disease (MOD) or the drug's mechanism of action (MOA).
One of the key components of educating healthcare professionals about MODs and MOAs is the use of animation. Animations are uniquely suited to bringing the microscopic world to life, explaining physiology, and illustrating the interactions between cells, proteins, and the drug's pharmacological effect in treating a targeted condition. 2D or 3D animation is used to illustrate how drugs exploit a weakness to either keep pathogens from multiplying or teach the body to fend for itself.
Mechanism Digital produces animated videos, mixed reality experiences, and touchscreen/table activities. One of my favorite phases of a project is helping an agency choose which media type will be most fitting to tell their story and attract the largest number of visitors to a booth. Working across an array of technologies and formats, we develop visuals to support KOL explanations of MODs/MOAs to other healthcare professionals. One benefit interactive technologies have over passive animation is the ability to track metrics of how many visitors engaged with our apps and what their knowledge level or interest was. This in turn helps agencies inform messaging for marketing and sales departments to use in non-CME opportunities.
With COVID rates plummeting and vaccine rates soaring, resuming in-person conventions seems to be just around the corner — which means it will soon be time to dust off the VR goggles and really make a splash at medical conferences. In my next post I’ll be talking about a VR project we worked on to educate physicians about neurogenic orthostatic hypotension (nOH).
Originally published: 07/10/2021
Lessons learned during a year of remote healthcare video production.
Mechanism Digital is proud to support the vital work of our healthcare communications clients and the Key Opinion Leaders who provide ongoing Continuing Medical Education to the healthcare community.
For years our artists had taken for granted our effective workflow of producing animated supporting graphics for talking heads shot on green screen, which allows us to replace backgrounds and maintain a consistent brand look for our clients.
That changed in March 2020, when medical trade shows and green screen studios closed. With our pipeline suddenly disrupted, we needed to help our healthcare clients find creative ways to pivot and continue delivering essential information.
Shooting a new episode each day with a different doctor meant it was not feasible to ship a green screen, lighting, and A/V recording equipment to each KOL's home location. Using Zoom seemed like a logical approach, but Zoom's compressed video format meant keying wasn't going to be satisfactory, and rotoscoping was too labor-intensive for an hour+ of footage each week.
Our entire team launched into solution-finding mode for a week. We researched and tested dozens of video recording options over the internet, using desktops and mobile phones. After scientifically quantifying our options, we ultimately decided to simply embrace the Zoom format, knowing doctors wouldn't have extra time to download, set up, test, and troubleshoot new software. Instead of green screen backgrounds, we adjusted our graphics templates and embraced doctors in their home/office environments. Fortunately, Zoom has an option for recording locally, which has higher quality than the streamed version. This also gives us a backup recording if there is a slowdown in someone's internet.
Remote producers work with doctors and suggest small changes to ensure even-lighting and reduce any background distractions. Our staff even produced a humorous video to provide doctors with "webcam tips, pants optional."
Simple changes kept videos on brand: the doctors are simply placed in a box, and the graphics have been set free to fill the rest of the frame.
Some remote recordings require a bit of audio sweetening: reducing echo/reverb, EQ to remove a specific frequency of hum/buzz, adding some low end back in to keep the doctor's voice from sounding too thin, and a final mastering pass to equalize levels across the content. We used to jump over to Adobe Audition for a richer set of audio tools, but we found Premiere's toolset works just fine, and staying in one application is more time-efficient for our quick turnarounds.
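For anyone curious what that chain looks like outside an editing application, here is a rough Python/ffmpeg sketch of the same steps. It is not our Premiere workflow; the filenames, hum frequency, and filter values are placeholder assumptions, and ffmpeg must be installed and on the PATH.

```python
"""Rough sketch of the audio clean-up chain described above, driven by ffmpeg."""
import subprocess

AUDIO_CHAIN = ",".join([
    "equalizer=f=60:t=q:w=2:g=-20",   # notch out a 60 Hz hum/buzz
    "afftdn=nr=12",                   # knock down broadband noise and roominess
    "bass=g=3:f=120",                 # restore some low end so the voice isn't thin
    "loudnorm=I=-16:TP=-1.5:LRA=11",  # final loudness pass for consistent levels
])

def sweeten(src: str, dst: str) -> None:
    """Apply the audio filter chain and copy the video stream untouched."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-af", AUDIO_CHAIN, "-c:v", "copy", dst],
        check=True,
    )

if __name__ == "__main__":
    sweeten("doctor_zoom_recording.mp4", "doctor_zoom_sweetened.mp4")
```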
This year of remote production has been surprisingly successful, and a good lesson in the KISS principle and Occam's razor: the simplest explanation is usually the right one. It has allowed us to keep focusing on smart solutions that look and sound great while delivering crucial educational content to healthcare providers and essential workers.
Originally published: 03/10/2021
Mechanism Finds Creative Alternatives to Big Budget FX Spends
cost-saving solutions for filmmakers
Mechanism Digital is celebrating its 23rd year of producing VFX/CGI for New York’s film and television community and has been growing at a rapid clip with new hires and new locations.
“I’ve known MD’s President, Lucien Harriot, for several years and always admired MD’s work – they were just too busy doing it to tell anyone about it. So I look forward to bringing them into more and more projects, while of course continuing to serve the creative community as a whole through the PNYA.”
Stephanie McGann has been added as Production Coordinator, coming from a background as a pre-production coordinator on series for CBS, MTV, VH1 and the History Channel.
MD has teamed up with Blue Table Post in Brooklyn. “We know a lot of our clients live in Brooklyn, and more and more production and post is happening there, so it was a natural fit for us to have a footprint within a great facility like Blue Table.”
Projects of note include MD's third season of the TBS series The Detour starring Jason Jones, and hundreds of shots on an upcoming ABC series. Fingers are crossed for one of 2017's highlights, The Big Sick, which is nominated for an Academy Award. Additionally, Come Sunday and Hereditary were at Sundance, and Hereditary, Wild Nights With Emily, and Galveston are headed to SXSW.
A compelling case study of how Mechanism leverages its years of experience into money-saving solutions for filmmakers is that of the comedy feature Sushi Tushi, Or How Asia Butted Into American Pro Football. The challenge was to use VFX/CGI to fill an empty pro football stadium with thousands of screaming fans, while staying within an indie budget.
“CGI audience creation and duplication through crowd tiling weren’t going to work, so we had to find a novel solution,” says Harriot. “So we worked together with Executive Producer/Writer Richard Castellane to come up with the idea to create animated, dreamlike sequences! And producer Robert Altman loved it. The tracking still had to be rock solid, but the keying and color matching were much more forgiving. This reduced the shot count down to 443, and kept the production level high and the cost down.”
“MD was a blast to work with,” said Altman. “From the planning meetings, to the background shoot at the Buffalo Bills Stadium, all the way through delivery. Lucien has a great team.”
Originally posted in the Post NY Alliance Newsletter
Originally published: 03/15/2018
The Big Sick's Invisible VFX
Invisible VFX for Films
The R-rated relationship dramedy directed by Michael Showalter and produced by Judd Apatow was the biggest deal of last year's Sundance Film Festival, purchased by Amazon Studios for $12M.
The Big Sick is a romantic comedy and far from a VFX blockbuster, but that doesn't mean it didn't need a bit of movie magic to help put the film in the can. Every movie needs some fixing in post, whether it was planned or not. VFX shots included adding snowfall, TV/phone screen inserts, split screens to combine two preferred takes, and company logo removals when a brand couldn't be cleared.
Mechanism Digital was the exclusive VFX vendor, completing about 100 invisible visual effects to meet the director's vision in just a few short weeks. "If we did our job right, you shouldn't see any of our work through the film," says lead artist Fangge Chen.
Mechanism Digital produces VFX for a couple dozen feature films and prime-time television series each year from its Manhattan studio, leveraging the New York tax incentives and the world-class digital compositors and 3D/CGI talent our vibrant city draws.
Originally published: 06/28/2017
VFX: Before, During and After for Festival Films plus VR
Film festivals have three distinct waves of collaboration
It’s been another busy year at Mechanism Digital, working on VFX for award-winning films. Some of the films we enhanced were Life, Animated, AWOL, The Lost City of Z, and Marathon: The Patriots Day Bombing, which garnered awards at Sundance, the NY Film Festival, and Woodstock, among others. We are happy to see many of these projects leveraging the Empire State Post Production and VFX Tax Credits, which are being passionately promoted by the PNYA.
During the past 20 years of VFX and post-production, we have noticed a curious pattern concerning films being submitted to film festivals: there are three distinct waves during which we get to collaborate and add a bit of movie magic, namely before, during, and after the festivals.
Before submission, producers want the story locked so it can be submitted... or they simply want to step up their game a bit to increase the chances of acceptance. For instance, Pimp, directed by Christine Crokos, needed muzzle flashes and enhanced gunshot wounds/blood to increase the impact of the final shootout scene. This additional improvement in effects just might make the difference between going to Park City or not.
During the waiting period, after a film has been officially accepted for a festival but before it’s shown, there is typically a flurry of effort to polish the film for the “win.” Knowing a film will be in a reputable festival such as Sundance can encourage producers to put a bit more skin in the game. They might ask us to add VFX which they may have been on the fence about. Often these are effects the director had been asking for, but the financiers hadn’t yet found the necessary money or time to get them completed.
After the festival is over, we'll get opportunities to work on films which were purchased and now have additional money available to digitally rework some shots. We also see situations where a studio like HBO buys a film and has higher quality standards which need to be met before the sale is final.
In 2016, we began working on films submitted to festivals as 360/Virtual Reality experiences. Last year, the Tribeca Film Festival introduced its Virtual Arcade for VR films, including BetterVR's Killer Deal, a horror-comedy in which our studio added digital blood spurting from a monster being chopped by Ian Ziering, a machete salesman in a surprisingly cheap hotel room. Next, Sundance 2017 is introducing its New Frontier category exhibiting innovative media projects, including several VR films/experiences for which we are in VFX discussions. This new format of storytelling brings the same three waves of VFX as standard features, although the total running times are typically much shorter. Don't be fooled by the shorter TRTs: the 360 format can often require more time to execute VFX if not planned correctly. Definitely discuss post and VFX needs as soon as possible with a Post or VFX Supervisor, whether you'll need VFX before, during, or after the festival.
Originally published: 01/09/2017
Down Home Dispensary
Schoolhouse Rock Meets AI: Crafting a New Kind of Lyric Video
Recently, Mechanism Digital in NYC collaborated with Vector Management to produce a lyric video, titled “Down Home Dispensary”, for the Grammy-nominated country artist Molly Tuttle. In this project, Mechanism Digital embraced generative artificial intelligence to elevate production value within a limited schedule and budget. The song humorously discusses the merits of legalizing marijuana in Tennessee, and the band wanted a 3-minute music video reminiscent of the 1970s Schoolhouse Rock animated educational cartoons.
The band's fun t-shirt illustrations were an inspiration for the characters; the next step was to develop storyboards and background images to support each phrase of the song's lyrics. Recognizing the impossibility of producing three minutes of hand-drawn animation within our timeframe, our talented artists turned to AI-assisted tools to amplify the vision.
We studied visuals from Schoolhouse Rock and trained GenAI software to replicate the Schoolhouse Rock style for each new environment. Then, using ChatGPT, a large language model, we uploaded the song's lyrics, which were automatically segmented and translated into visual ideas fitting each section of the song. The upside of AI is the number of concepts that can be produced in a short time. The downside is that it doesn't understand which ideas are on target for the concept, which is where experienced artists were necessary to guide the AI toward our final 15 scene concepts.
For each of these 15 scene descriptions, we then employed Stable Diffusion, a generative AI image tool. Our team input, then iterated, each prompt dozens of times, editing the descriptive text to generate hundreds of images, from which we picked only a small percentage for the video. The chosen images were refined in Photoshop to honor the Schoolhouse Rock style. Final scenes included picturesque Southern landscapes of rolling hills, small towns, tobacco fields reflecting the South's history, and a whimsically designed "Down Home Dispensary" building. Additional scenes featured politicians, voters, and illustrations of economic growth, as well as cannabis plants and products showcasing the medical benefits. Concept descriptions of the Tennessee Capitol Hill building exterior and interior were also included. Our editor used these selects to produce the animatic, a storyboard set to music using still frames, and presented it to Molly Tuttle for approval of the overall concept.
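To give a sense of that iteration loop, here is a minimal Python sketch using the Hugging Face diffusers library to batch-generate candidate frames per scene. It is only an illustration of the kind of loop described above, not our production setup; the model ID, prompts, and style string are assumptions.

```python
"""Minimal sketch of batch-iterating prompts with Stable Diffusion via diffusers.
Assumes a CUDA-capable GPU; the model ID and prompts are placeholders."""
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

STYLE = "1970s educational cartoon style, flat colors, hand-drawn look"
scene_prompts = [
    "rolling southern hills with a small town at sunset",
    "a whimsical storefront with a hand-painted sign",
    # ...one line per approved scene concept
]

for scene_idx, subject in enumerate(scene_prompts):
    for variation in range(8):  # several candidates per scene for the artists to curate
        image = pipe(f"{subject}, {STYLE}").images[0]
        image.save(f"scene{scene_idx:02d}_v{variation:02d}.png")
```

The script only produces raw candidates; as described above, the curation, Photoshop refinement, and style policing still fall to the artists.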
For animating the band members, we used Adobe's Character Animator, which captured the motion of animators playing "air" instruments in real time via camera and applied it to each 2D character. Val Iancu, our main artist on this project, acted out each character's movements, including the "Joint" character. For lip-sync, AI tools separated the vocals from the instruments, and the isolated vocal track triggered the mouth movements in After Effects.
We used traditional animation techniques for camera pans and zooms, and synchronized all elements with the song's lyrics. The final product showcased the surprising amount of animation we can achieve in a short time. This project was an exciting exploration of AI tools, allowing our artists to focus on the creative aspects while the computer handled the more laborious tasks.
Originally published: 12/21/2023
AI-Assisted Medical Animation
AI can write novels, but it knows nothing about novel medical discoveries.
I’ve written about artificial intelligence resurrecting our dead family members, but even in the short time since we posted “Tell Me a Story Grandpa,” we have found dozens of more practical uses for AI in producing healthcare communications.
The most important thing to know is that the VFX industry still can’t directly use AI to generate medical content. The explanatory graphics and animations that healthcare and pharmaceutical companies need for their educational content must be very accurate and precise, and AI can’t do that yet. Besides that, precious few people on the planet even know about these new mechanisms of disease (MODs) or mechanisms of action (MOAs) before the drug is on the market; an AI wouldn’t be able to illustrate anything about them because these discoveries are not yet in any of its learning models.
That said, AI has become integral in other ways, like producing inspirational imagery for mood boards, one of our pre-production visual brainstorming tools. AI is also very good at quickly providing many iterations of visual ideas, each slightly different. Our team of humans makes the final choice, and then takes over to ensure all of the details are accurate. We've used this method with great success to create background imagery inside the bloodstream (affectionately known as the "meat wall").
For more detailed elements in a scene, we previously would have had to research and source images, or paint custom textures by hand, to design the perfect branded (or unbranded) look for each element. Today, using carefully curated text prompts, AI can generate wet, slimy, bumpy, and other surface textures to be applied to the library of three-dimensional models we've developed over decades of working in this industry. Obviously, the tools for generating these textures have come a long way.
AI has been making inroads into the other senses, too. Voiceover work is increasingly the domain of AI, and there's a surprising reason why: the fast-talking disclaimers in drug commercials that everyone makes fun of are actually quite challenging to produce. They typically involve multiple rounds of back-and-forth with attorneys, and the copy usually changes several times. With the help of AI, we can ensure that there won't be a change in the narrator's voice if anything needs to be inserted, sped up, or slowed down.
Creating final animations directly with AI is likely to remain a challenge for the foreseeable future. In addition to the lack of medical accuracy, playing back a sequence of AI-generated still images results in flickery motion which is hard on the eyes. Accuracy and flow are two aspects of the creative process that seem to rest squarely in Team Human's court. For now.
Originally published: 07/13/2023
A Cycle of Innovation and Education
Is the relationship between pharma companies and prescribers more synergistic — or symbiotic?
Many people would be surprised to learn that most of the new discoveries we’ve been seeing in medicine are made by private companies, not government laboratories as dramatized in the movies.
What I call the Cycle of Discovery is an emergent, self-sustaining dynamo resulting from pharmaceutical companies and doctors in a symbiotic relationship:
- Companies research a disease/condition and develop novel treatments to disrupt the disease.
- Medical professionals are educated about the disease and disruption, as well as why the treatments work the way they do.
- Medical professionals prescribe drugs developed by the company, which demonstrate the most current research and progress in treating a particular disease.
- Sales from these drugs help to fund continuing research by the pharma company.
- Rinse and repeat.
It’s a cycle of innovation and education — noteworthy not only for its capitalistic nature, but also because no other organization would be making the same discoveries.
The job of educating practitioners about new breakthroughs in treatments is accomplished with the help of leaders in the field. Key Opinion Leaders (KOLs) are well known members of the medical community who are regarded as resources by their colleagues. Pharmaceutical companies engage KOLs as information conduits between the company and the medical community.
KOLs frequently find themselves at medical congresses, as well as standalone panels and workshops. Events like these have always been relatively easy ways to reach specialized audiences, like medical practitioners. Pharmaceutical companies sponsor educational events which provide Continuing Medical Education (CME) credits and doctors are required to earn a certain number of CME credits each year — so attending a credit-bearing workshop is a win-win for them and their patients.
Doctors don't have much time to learn new material, which is why animations at these workshops take months to produce, with all hands helping to boil down various MODs and mechanisms of action (MOAs) into easily understood vignettes. The end product is often still too clinical for consumers, but perfect for educating doctors and helping them make decisions and inform their patients.
Originally published: 04/06/2023
AI is creating bionic artists
Is teaming up with AI the answer?
Match-ups between humans and computers were a regular feature of the Cold War chess-tournament circuit. Computers were consistently beaten by human players for half a century, until, one day in 1997, humanity lost. Further research into decision-making would establish a surprising trend. If a human is working in concert with a computer, the pair performs better than the individuals would separately.
This is an important lesson to keep in mind whenever a new technology surfaces, including artificial intelligence (AI). The filmmaking industry has already found ways to use AI, up to the point where entire movies can be made with it, but only if creative humans guide it, from scriptwriting through final output. Currently, Mechanism Digital is using AI in the early stages of a project, mainly to produce inspirational imagery. In another project, we'll be using AI to produce matte paintings for background imagery.
Greg Rutkowski is an artist whose opinion of AI generators has grown increasingly negative over time. Rutkowski is one of the few living artists who can attest to what it feels like to be source material in an AI's brain. When an AI famously produced a work that strongly echoed Rutkowski's Revolution, people noticed.
It’s possible to review the manually input prompts given to Stable Diffusion, and it was revealed that 100,000 humans sat in front of a keyboard and typed Rutkowski's name, compared to about 3,000 who were seeking images in the style of Pablo Picasso. This drives home the point that people are the ones pulling the strings of AI generators.
Behind the scenes, Stable Diffusion's parent company is engineering its application to work well within other applications. Stability AI is developing an API, or Application Programming Interface, with plans to license it to other companies.
When I wrote "Prompting AI Imagery for Production," I raised the question of whether the three big AI generators (Stable Diffusion, DALL-E, and Google Dream) are capable of making art. At first, I didn't want to call AI-generated images "art," because the computer wasn't trying to evoke an emotion. Then I realized the computer is just the tool, and the artist is the person who writes the prompt that is rendered by the computer. Prompt crafting is not so simple; prompts usually have to be rewritten many times until the computer produces an image the user is happy with. Today I think AI-generated imagery is art. It doesn't require the talent of a fine art painter, but rather the talent of a photographer who finds, or sets up, a meaningful or beautiful picture before capturing the image using technology.
Originally published: 01/25/2023
A Day in the Life of a VFX Studio
I did some sciencing, I got creative, and I would do it all again.
A day in the life of a VFX studio is not that different from other industries. There are budgets, Zoom and Teams meetings, a cross-section of personalities — and a lot of quick problem solving. I recently went through a whole day making a list of all the tasks I completed as I went along.
- Reading scripts and highlighting the areas that would benefit from VFX intervention. This includes scenes with screens (TVs, mobile devices), muzzle flashes, and the occasional scary monster.
- Discussing the list of shots with the filmmakers to better understand their creative vision.
- Brainstorming how to shoot each VFX scene with the Director of Photography.
- Visiting the set to help the production run smoothly and nip expensive mistakes in the bud, like a poorly lit green screen or a camera that is wild.
- Suggesting low-cost and clever alternatives to expensive approaches to VFX. For example, if a camera move needs to land in a very specific position and angle, sometimes it’s easier to shoot the shot backwards by starting in that position.
- Working with filmmakers in a spotting session to discuss shots that can be enhanced with VFX. This includes everything from the removal of a boom mic to complicated split screens that combine two actors from two different takes with moving cameras.
- Bidding all the VFX shots for projects and presenting cost and schedule proposals.
- Working with our team of artisans to problem solve technical and creative challenges.
- Leading a client through a design process from research to storyboards, styleframes, layout modeling, animation, rendering, effects — and the final render and color correct.
Just as the day was drawing to an end, we received some files from a new client: the production company for an indie film we had agreed to work on at a discount. We often help independent filmmakers arrive at a slightly discounted price, but, inevitably, there are always little fires.
Because every frame we work with is an individual file, very specific naming conventions have to be followed in order to keep track of everything. On this day, the files from the production were in different formats, the wrong color space, and incorrect codecs, with disorganized naming. It was up to me to make time to spend with the client, with the ultimate goal of empowering them to re-deliver everything correctly so my team could focus their time creating VFX. These investments in filmmaker relationships always pay off the next time we work with them on bigger projects.
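As an aside for the technically inclined, a delivery like that is exactly the sort of thing a tiny script can flag before anyone burns an afternoon on it. The sketch below is purely illustrative; the naming pattern and folder name are assumptions, since every studio defines its own convention.

```python
"""Quick sketch of validating incoming frame names against a naming convention.
The pattern (SHOW_shNNNN_task_vNNN.FRAME.exr) and folder name are placeholders."""
import re
from pathlib import Path

# e.g. "BIGFILM_sh0420_comp_v003.1001.exr"
PATTERN = re.compile(r"^[A-Z0-9]+_sh\d{4}_[a-z]+_v\d{3}\.\d{4}\.exr$")

def check_delivery(folder: str) -> list[str]:
    """Return the filenames that don't match the expected convention."""
    return [
        p.name
        for p in sorted(Path(folder).iterdir())
        if p.is_file() and not PATTERN.match(p.name)
    ]

if __name__ == "__main__":
    problems = check_delivery("incoming_plates")
    for name in problems:
        print(f"Rename needed: {name}")
    print(f"{len(problems)} file(s) out of convention.")
```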
It was an epic day, and we all know how those end: Rinse and repeat.
Originally published: 09/08/2022
Keeping Culture in a Remote World
Virtual Teams will be social if you let them
The shift in paradigm from working in offices to working from home is not without its casualties. Namely, the company culture. When you're in the same office with your colleagues, you interact with them all day long — but it’s the kind of thing you don't realize until it's not there anymore. We're passing by each other's desks, or at the water cooler, and chatting. That doesn't happen as naturally or as organically when we're all online, because you have to actually click on Zoom and call somebody deliberately. This has actually been good for productivity, because you don't get distracted all the time. As humans, I think we all need that interaction with each other.
It’s up to the leader of a company to be deliberate about making sure that the culture doesn't get stale or lost or forgotten. If we don't deliberately create opportunities or channels for being social, culture tends to fall by the wayside. That's why we implemented some ways to keep the culture alive. You don't see people by the water cooler or the coffee maker anymore, so we created a water cooler channel in our chat system. People add funny things, videos we think are interesting, or funny gifs. I used to think sending these things was just causing people to avoid work, but now I realize that a bit of that, in moderation, is necessary. That's a passive way of keeping the culture.
We have an active culture-building event about once a month when we get together for Family Game Time. We usually play a video game together like Among Us, or a virtual escape room. Drawful is a cute, Pictionary-type game. Family Game Time encourages people to get together — and if you want to drink a beer, that's fine too because they’re always towards the end of the day.
The mornings have a different vibe, starting with a meeting and what we call the fun share. We all talk about something that happened yesterday, or what we're going to do over the weekend, or what we did over the weekend. Sometimes it's what somebody cooked last night for dinner, or a movie they saw. These morning meetings are supposed to be 15 minutes long, but sometimes we find that an hour later we're still chatting.
When I put my owner-of-the-business hat on, I'm tempted to think, "Wow, we just wasted an hour here of everybody's time." But then I realize how it pulls us all together as a team, that it is very necessary, and that we would have easily used up that hour around the office anyway.
Another thing we do a few times a week is co-working time. We all just turn on our cameras in a virtual co-working space we created, and we work knowing other people are there. Sometimes somebody will say something or make a joke, everybody laughs, and then we go back to work. I think in the future, we might work like that a lot more.
All these little steps have helped to keep everybody close. Since the beginning of remote work two years ago, about half of the staff is new. Even though they're in the New York area, I've actually never met them in person. But I feel like I know them very well. I think that culture is an important human need that we've been able to address. But we had to be intentional about it, because it wasn't happening naturally with the remote tools that we've all been using over the last two years.
Originally published: 05/20/2022
Credit Where Credit Is Due
Guild rules, status, and careers all come into play behind the scenes of every opening credit sequence.
In my last post, I wrote about some of the creative decisions involved in orchestrating a film’s opening credits. Design is only one of the many considerations — what’s even more complicated are the politics, union rules, and hierarchy of actors and crew.
Each credit in an opening sequence is commonly referred to as a “card.” You may have noticed that the biggest stars tend to have their own cards just before the film’s title. The other contender for the final pre-title card is the director. In general, the closer your card is to the title of a film, the more creative influence you had on the film. If an actor or crew member does not need to share their card with somebody else at the beginning of a movie, it’s a big deal. It means the person has “made it” in the film business.
Opening credits, also known as pre-credits, can be classic white over black or integrated with establishing footage. In the latter case, the appearance of each credit tends to alternate between the right and left sides of the frame to maintain a sense of balance — but that can all be subject to change in order to fit the screen composition; in some cases, footage is swapped or re-ordered to better balance the credits.
Names on single cards are typically larger than shared cards and often negotiated as a percentage of the main title font size, but sometimes creative decisions of the opening sequence design force important names to be shown smaller. In this case, we actually have to obtain a formal waiver from whichever guild that person belongs to (Screen Actors Guild, Directors Guild, Producers Guild, etc.).
Some films don't show any opening credits at all, instead opting to go straight to the title card, or even a "cold open" where the main title doesn't come up until after a dramatic first scene. Forgoing credits is a creative and emotional decision, made either to jump directly into the action or because there are no notable names attached to the film that would raise audience anticipation.
What about the ending credits? They are far more standardized, rolling in the same order for every film, and everyone’s names are the same size. In fact, end credits are usually made by entering the information into an online service like Endcrawl, which produces a piece of video easily added to the end of a film.
Comparatively, the opening credits and title sequence are a much more creative bookend for a film. We're always honored when a director chooses us to work on opening credits, because we're being trusted to kick off their baby in the right way.
Originally published: 03/11/2022
The Voice of Reason
Voiceover production considers everything — from tone to timbre.
In my last post, I introduced you to the basics of voiceovers — from how the industry has become remote-friendly, to choosing a voice that stands out in a crowd. In this post, I delve a little deeper into the creation of impactful videos.
During VO recording sessions, audio studios typically interface with both clients and ad agencies. Some agencies are able to make decisions on behalf of the client; other clients prefer a more hands-on approach. It all starts with listening to demo samples of various talents from the web. This is usually enough, but if the client or agency wants to hear their specific content auditioned by particular artists, test recordings can be arranged for a lower cost than the final hero session.
Throughout a project's production, visual elements are edited on a timeline according to a scratch voiceover. This no-frills voiceover is typically read by an editor or a producer as a track to set the timing for all the other elements. Once all the kinks and idiosyncrasies are worked out of the script, it's not unusual for the talent to swoop in and lay it down perfectly the first time.
That said, in the spirit of not having to do things twice, seasoned producers also take “safeties” of any word which can be pronounced in more than one way to be kept in their back pocket to avoid re-records. Even for a pro, speaking a single word on its own sounds different than when it is spoken in a sentence, so it's best to record the entire sentence as a whole — which is well worth the investment despite the session time adding up. It usually takes about an hour to record five minutes of content.
Every once in a while Mechanism Digital works with a celebrity on a voiceover. The following anecdote concerns a voice nearly everyone is familiar with. The recording session had gone smoothly, and at the end our client said: “Amazing! Your voice is perfect for our annual meeting’s video. As a last request, can we also ask you to say, ‘This is ______ ______, and I welcome you all to our annual meeting’?”
The actor/narrator responded, "Wait a second. If I'm endorsing your company with my name, that's a whole different conversation than just having the sound of my narration." His rich voice diffused any potential awkwardness in a way that didn’t imply the client was out of line. It was a live demonstration of what the right voice on a project can accomplish.
Originally published: 01/27/2022
Do Not “Shoot and Ask Questions Later”
A five-minute conversation can save five thousand dollars.
“We’ll fix it in post,” is something people say a lot in the film industry. Translate this phrase into producer language and it comes out as, “We’ll have to find more money later.”
Small decisions on set can have huge cost implications in post-production, like shooting with a sign in the background. Reasons for removing signage vary, but often it involves the legal department not clearing a brand logo or a modern sign that doesn't fit in a period story. Even stories based on a news event five or 10 years ago require careful background attention when on location, especially in busy cities.
In general, having to remove or replace anything in post is made more difficult if actors are moving in front of the object. It's also much easier to replace or remove a sign in post if the camera is not moving. (If desired, adding some camera motion is relatively inexpensive in post.)
Green screens have become such a fixture of production shoots that VFX studios sometimes have to temper a client's enthusiasm for them. Filming an actor who is standing directly on a green screen will most likely result in a hefty visual effects bill. It often takes three times the work to make it appear as if the person's feet are grounded, which requires matching the angle of the ground and the shadowing. One simple solution is to shoot "cowboy," which frames the actor just above the knees.
It's in everybody’s interest, including the visual effects studio, to help reduce costs — as counterintuitive as that sounds from a business perspective. Good artists have more than enough on their plates and would prefer to be working on exciting effects that enhance the shot — as opposed to hours of rotoscoping someone's blurry hand because it's crossing in front of an unwanted element that needs to be changed.
The key to reducing costs is to speak with a visual effects studio before principal photography. I never get tired of working with filmmakers to brainstorm different approaches to shooting a scene. Even before my team is formally hired on a project, we want to know if it's a right fit for us and the filmmakers also get to learn what the VFX studio can bring to the table. We love our craft.
Originally published: 11/30/2021
Educating for Disease Awareness using Animation
In corporate communications, complex biological processes can be represented in clear animations that bear little resemblance to the actual physiology inside all of us.
If you stuck a camera inside the human body, you wouldn’t see much more than tissue pressed up against the lens, AKA “the meat wall,” looking a bit gross. That’s why the medical community understands that sometimes clarity needs to take precedence over medical accuracy in corporate and consumer communications.
Healthcare graphics and animation have many of the same phases as the narrative short films Pixar produces, but they are required to be much more sensitive to the details around legal facts and medical accuracy.
The mechanics of proteins and genes populate the majority of Mechanism of Action (MOA) and Mechanism of Disease (MOD) animations. Understanding microscopic physiology is a prerequisite for producing high-quality animations. Some studios don't have certified medical illustrators on staff, but their producers should be comfortable working with agencies and the client's medical directors. Throughout the process, the studio should be able to hold productive meetings to discuss research materials without requiring low-level explanations from the medical director, in order to keep the project on track. The finished product must depict these "mechanisms" of physiology accurately, to tell a story that illustrates how the disease affects the human body and/or how treatments benefit the patient.
Those mechanisms look very different on screen than they do under a microscope. Over the years, medical illustrators (in books or motion) have depicted tissue, cells, viruses, proteins, and synaptic signals as recognizable distinct objects for the ease of communication.
For instance, under a microscope, the double-helix of DNA looks nothing like the models and illustrations we’re used to seeing in explanatory illustrations.
Another good example of creative illustration is the depiction of single cells and proteins as floating individually through an empty cavern of space, then connecting with what seems to be a sense of self-awareness — as opposed to the real-life randomness that governs these microscopic elements of physiology.
Medical animations can easily cost six figures, and take three to six months to complete and pass through the legal process. A clean and organized proposal — with a clear scope of work, assumptions, deliverables, client expectations (who supplies the script and medical director), deal points, and most importantly, a detailed schedule outlining major milestones or phases of design and animation — is a sure sign of a studio that has its act together.
I hope this gives some insight into choosing an animation studio that fits with your needs and style. Please watch out for the second part of this article in which we discuss the added complexities of production when working on a medical education animation, including the legal stuff.
Originally published: 10/28/2021
When NOT to use Green Screen
Background screens come in many colors — are you sure green is the right one for you?
One sight that’s ubiquitous in behind-the-scenes footage from movie sets is the green screen. Green screens are a way to insert backgrounds into a scene while avoiding the time-consuming and expensive process of rotoscoping (painting digitally frame by frame) around people.
This method takes advantage of the fact that green is the least common color in human skin and clothing. VFX software can then isolate the green color to separate the background from the foreground people. This is called chroma keying or simply “keying.” But… green screens are not right for every situation — which is why starting a dialogue with your VFX shop before you shoot is so important.
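For the technically curious, the heart of a keyer is surprisingly simple. Below is a minimal sketch in Python (using NumPy) of the idea described above: measure how much green dominates each pixel and turn that into a matte. Real keyers such as Keylight add spill suppression and edge refinement, so treat this purely as an illustration, not production code.

```python
import numpy as np

def green_screen_matte(rgb, threshold=0.1):
    """Very rough chroma key: keep pixels where green does NOT dominate
    red and blue. `rgb` is a float image in [0, 1] shaped (height, width, 3).
    Returns an alpha matte in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    greenness = g - np.maximum(r, b)      # how much green exceeds the rest
    # Strong green excess -> background (alpha 0); everything else -> 1.
    return np.clip(1.0 - greenness / threshold, 0.0, 1.0)

def composite_over(foreground, background, alpha):
    """Standard 'over' composite of the keyed foreground onto a new plate."""
    a = alpha[..., None]                  # broadcast the matte across RGB
    return foreground * a + background * (1.0 - a)
```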
For instance, a performer like Kermit the Frog should obviously be shot in front of a blue screen. Blue screens were once the most common way to insert a background; meteorologists were famous for doing their forecasts in front of a blue screen. Historically, the blue channel in film had the least grain, resulting in cleaner edges, but video compression degrades the blue channel, and the green channel is now the higher-quality one with film grain out of the picture. The transition to video and digital capture has shifted the majority of the industry to use green over blue, except when the foreground subject dictates otherwise.
If we are shooting elements separately in order to keep the actors safe — like fire, explosions or bullet ricochets — we isolate these elements by using a black screen. Black screens maintain the subtle luminance in the edges of bright elements. When you see a bullet ricochet off of something or a gun shooting sparks and smoke, that element was probably isolated using a black screen before it was added to the shot in post.
So within a particular scene, the actors might be shot against a green screen while the explosion is shot against a black screen. We can pull those elements together and composite them at the highest quality.
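When those black-screen elements get layered in, they are usually combined with an additive or “screen” blend rather than a hard matte, so the soft glow around sparks and flames survives. A rough sketch of both blends, continuing the NumPy example above and purely for illustration:

```python
import numpy as np

def screen_blend(plate, element):
    """'Screen' blend: brightens the plate by the element without a hard
    matte, preserving the soft edges of sparks and fire shot on black.
    Both inputs are float RGB images in [0, 1]."""
    return 1.0 - (1.0 - plate) * (1.0 - element)

def add_blend(plate, element):
    """Plain additive blend, clipped back into displayable range."""
    return np.clip(plate + element, 0.0, 1.0)
```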
I've never heard of a situation where a red screen was used, although it would work for shooting the Blue Man Group if Kermit was a special guest on their show.
Excessive spill or reflection of a screen’s color on your subject can be avoided by keeping six feet between the actors and the screen when possible. Sometimes the background to be replaced is a phone screen with fingers typing over an image to be inserted later. In this case, I wouldn't suggest a green screen because the fingers are too close to the screen and green light will be reflected onto the skin tones. For these cases we suggest a 50% grey screen image which illuminates the fingers and also allows us to capture the screen's natural reflections for realism. Grey has no chroma (color) information, so we use the luminance (brightness) to separate the elements.
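The grey-screen case works the same way, except the matte comes from brightness instead of color. A toy luminance key might look like the sketch below; the 50% screen value and the softness number are placeholders, not settings from any real job.

```python
import numpy as np

def grey_screen_matte(rgb, screen_luma=0.5, softness=0.15):
    """Toy luminance key for a 50% grey phone-screen insert: pixels whose
    brightness sits near the known screen value are replaced (alpha 0),
    while darker or brighter fingers keep their alpha."""
    luma = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    distance = np.abs(luma - screen_luma)   # distance from the grey screen value
    return np.clip(distance / softness, 0.0, 1.0)
```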
I often meet with the DP (director of photography) and gaffer (the crew member responsible for lighting a scene) during pre-production to discuss the use of any screens and ensure that the post-production process goes smoothly and avoids expensive surprises.
Having conversations with a VFX team before a shoot is good due diligence. Any serious VFX shop will be happy to have a friendly conversation before a shoot. Or if you find yourself on a shoot and unsure about something, snap a photo and send it over to me for some real-time advice.
Five minutes on the phone can save $5,000 in unnecessary post-production labor.
Originally published: 08/24/2021
Teasing a Great Performance Out of Texting
Producers and directors used to run from portraying text messages in films — until they got the message that there is no place to hide.
Over the last 15 years, the use of texting and social media as essential plot elements in films has grown exponentially. At first, filmmakers were hesitant and pushed back. Directors didn't want social media and texting in their films, spending precious run time focused on phone and computer screens.
Feelings gradually changed as it became clear that the new technology had become a part of our lives that writers could no longer ignore. After all, films are generally about relationships — and texts and messaging are often the glue that holds them together in a practical sense. This has led to the industry embracing the use of texts and chats in creative ways.
One way to set up this storytelling device is to show text conversations across the “fourth wall” as a graphic overlay. This is common, and a lot of TV shows have excelled at designing overlays. With an overlay, text typically pops up across the screen in real time as we see an actor using their phone in a medium or wide shot. Although this works well in TV, where graphics are often more acceptable, feature film directors typically avoid this approach because it can disrupt the audience’s emotional connection with the story.
Most films we work with try to shoot phone and computer screens practically when possible or have our VFX artists key, rotoscope and composite screen inserts into footage during the post and finishing phase. Compositing is always more expensive than shooting a phone in-camera, but often necessary as actors may mistype text messages with their thumbs on set.
Now there is an alternative as we recently developed an innovative solution for a film requiring large portions of its plot to be revealed over social media and texting. We brainstormed some ideas and ultimately decided to program a messaging app into which we could load all of the conversations from a script. As the actor typed on desktop or mobile phone keyboards, the letters individually popped up and simulated sending a text message.
The app solution has several benefits (a rough sketch of the core idea follows the list):
- There weren't any typos because the texts were preloaded. Even if the actor tapped a wrong letter, the text message came out exactly the way the script called for.
- When the actor hit send, there was a user-defined time delay based on how long the receiver would take to read the text before responding.
- The “flashing little dots” that indicate the other person is typing were also included and used several times by the director as a dramatic anticipation device.
- A preloaded response popped up and the conversation could be as many lines as necessary.
- The app is controlled over the web by a separate laptop, allowing directors to quickly modify conversations during shooting.
- No concerns about getting permission from big tech companies like Apple or Facebook to use their products in a film or TV show.
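For anyone curious how such a prop works under the hood, the core logic is tiny. Here is a hypothetical sketch in Python — not our actual app, which was a web-controlled tool, and every name and message below is made up — showing the typo-proof typing, the send delay, and the preloaded reply:

```python
import time

class ScriptedTextField:
    """Toy version of a 'typo-proof' on-set texting prop: whatever key
    the actor presses, the next scripted character appears instead."""

    def __init__(self, scripted_message, reply, read_delay_seconds=2.0):
        self.scripted_message = scripted_message   # what the script calls for
        self.reply = reply                         # preloaded response
        self.read_delay_seconds = read_delay_seconds
        self.cursor = 0                            # how much has been "typed"

    def key_pressed(self, _any_key):
        """Ignore the actual key; reveal the next scripted character."""
        if self.cursor < len(self.scripted_message):
            self.cursor += 1
        return self.scripted_message[:self.cursor]

    def send(self):
        """Simulate send, the 'typing' dots, then the preloaded reply."""
        print("SENT:", self.scripted_message)
        time.sleep(self.read_delay_seconds)        # receiver "reads" the text
        print("...")                               # typing-indicator beat
        print("REPLY:", self.reply)

# Example: the actor mashes random keys, the right words still appear.
field = ScriptedTextField("Meet me at the diner at 8", "On my way", 1.0)
for key in "asdfghjklqwertyuiop":
    field.key_pressed(key)
field.send()
```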
Our artists designed both a mobile text interface and a desktop browser — which our team affectionately called “Fakebook” — that looks like a blend of all the different social media sites. Producers send us photos and names to pre-load into profiles on the fictional social media site, thereby creating a backstory for characters. The apps allow actors to scroll and click on web pages, helping them stay in the moment, while saving a tremendous amount of money and labor in post-production VFX.
In the future, we’ll all be wearing high-tech AR glasses that display text directly in our field of vision, at which point we’ll have to rethink how texting should look in films. But in the meantime, we are excited to see this creative use of technology help our clients use text-based devices to tell compelling stories.
Originally published: 06/08/2021
Uncut Gems – How do you dangle Adam Sandler out of a NYC window?
The directing duo, Benny and Josh Safdie, are a pleasure to work with on set and during post-production. They set the bar high, and their latest project, Uncut Gems, is another unique film to add to their growing body of work. My team at Mechanism Digital was excited to work with the Safdies. Like all films, we looked forward to learning a few tricks and embraced some happy accidents. Our NY studio was brought in on set to supervise the visual effects, shoot VFX plates, then execute about 100 shots for the film. Both on set and during the few weeks of conform and color, there are a lot of moving parts that require fast and crafty decisions which could impact the finishing schedule and costs. Looking back, my team remembered three pivotal VFX shots which required a little extra creative brainstorming.
The first shot that stands out in our minds from shooting was when Howard Ratner, played by Adam Sandler, was being hung out of a window high above the 47th Street Diamond District.
We were provided a beautiful set piece fabricated to look like the brick exterior of Howard’s 10th-floor office window, except it was only 10 feet off the ground with green screen all around it. So far, this looked like an easy shoot. Adam came in and was characteristically blasting classic rock tunes on his portable speaker while camera and lighting were making some final adjustments. Upon shooting the first take, an unexpected issue became painfully obvious. As the two big hit-men threatened Adam by dangling him out the window (held by a safety cable/harness), the force of the three men struggling was causing the walls of the facade to wobble back and forth. Cut… Watching playback, I discussed with Benny and Josh how the wobble made the bricks look like rubber, and our VFX artists might not be able to match the real building’s solid facade. We decided to take five while they had a carpenter fly in to brace the structure from behind to reduce the flexing of the thin wood. Twenty minutes later we were back at first position and the actors were wrestling Adam upside-down through the window again… Cut… The set was wobbling less, but I had to warn that it still might slide around when we tracked it onto the real building to appear 10 stories up. We were now behind schedule and I had to decide: should I recommend the carpenters come back in and we wait another half hour, losing precious crew time on set, or figure out how to solve the wobble in post? After thinking through digital approaches, including tracking and warping, I was confident we’d be able to solve the problem back in the studio, even if we had to completely roto the actors and replace the set piece, which in fact turned out to be our end solution. The shot came out great, and we were able to absorb the cost in the overall budget, which meant no surprise overages for the filmmakers, and we learned a good lesson to watch out for in the future.
Another shot towards the end of production involved a recreation of a court-side interview with Kevin Garnett, aka "KG", of the Boston Celtics during the 2012 playoffs.
We needed a background plate of a basketball venue to fill in the blue screen background. We searched through stock footage but couldn’t find the right crowd or angle to match the low camera looking up at the towering KG. Next, we considered animating CGI extras, but this is incredibly expensive for a single shot. I asked if anyone had access to an event at Madison Square Garden; after all, it was during March Madness. Josh jumped up, “I have an idea! I’ll be in Boston this weekend. I can ask my buddy, who is connected with the Celtics, if he can get me seats to the game. I bet I can just walk out on the floor and shoot some footage on my iPhone. Would the resolution be high enough?" "Sure!" I said, "The background was going to be in soft focus anyway, just try to hold the camera still for a few seconds, that’s all we need." The lower quality of the iPhone was fine once we converted it to LOG color space. Josh even got to enjoy a basketball game and see his home team win.
A third challenge, or “opportunity” as we like to call them, was a scene on the streets of NYC following Howard and his father, Judd Hirsch, to his car.
The editor noticed that in several shots, across the street, there was an obvious Citi Bike station, which hadn’t existed during the film’s setting, and asked if we could paint it out. In addition to the hand-held camera following the actors, the bigger issue was the pedestrians on the other sidewalk walking behind the bikes. I explained to the editor and directors that rotoscoping and rebuilding the crowd’s legs between the spokes of each wheel was out of the question in terms of labor costs. Josh and Benny were debating whether they could live with this inconsistency and played back the whole scene to see how much the bikes stood out. This was the first time we had the chance to see the whole sequence alongside the offending shots. We noticed other angles of the scene revealed a construction site lined with orange and white barricades along the street. Our suggestion to hide the bikes by compositing a few more construction barricades at a fraction of the cost was a big relief to the directors and producer. Our digital artists were also relieved we wouldn’t have to roto spokes!
Sometimes solving a problem on set or in post is not a straightforward textbook example. As VFX supervisors, we’re there to help the entire team make quick, creative, confident decisions to keep the director happy with the final look while keeping production on budget and schedule. Believe it or not, we do look for unique situations where we actually save everyone money by saying those dreaded words, “We’ll fix it in post!”
Originally published: 02/15/2020
VFX saves money on-set and in post
By utilizing VFX, a film can be written 3.5 times
VFX is generally considered expensive, and I often hear it getting a bad rap when it comes to a film’s budget. Sometimes at Mechanism Digital, we feel like the dentist that nobody wants to go to but knows they have to. I’d like to help turn around the negative notions and position VFX as an opportunity to help save on costs.
When VFX studios are doing big effects for blockbuster films like Transformers or Jurassic Park, then sure, those are very expensive, but most of the shots we work on are more utilitarian, fix-it-in-post type shots that help push a film over the finish line.
In pre-production, a good VFX supervisor can make suggestions for digital set extensions to be added in post to augment a much less expensive location. For example, Bravo’s television series Odd Mom Out asked us to add 10 floors to a two-story hotel, making it more grand to fit the storyline.
Also in pre-production, planning crowd duplication (crowd tiling) can streamline production tremendously by letting you shoot only 10-25% of the needed extras. We are currently working with Joshua Marston on his feature Come Sunday, which effectively used this strategy for a large church audience by locking the camera, shooting the crowd in orderly sections, and shuffling actors’ positions for each take.
Additionally, during the edit, we are seeing more split-screen work these days. The Big Sick employed this technique about 50 times, giving director Michael Showalter more freedom to combine actors’ performances from separate takes. This isn’t so much a new technique, but I think editors are learning they can take advantage of this trick. Even with handheld cameras we can marry two different shots together seamlessly. This can even help on set: if a director has two good but separate takes, the crew can move on and stop spending costly minutes on that “one more take.” We have also used this orchestrated technique in pre-production for ABC’s new Deception series to create twins from one actor so he could interact with himself.
They say a film is written three times: once as a script, again on set, and a third time in the edit. It’s common to use a shot intended for one part of a film to help fill out another scene. VFX can help with continuity, as in Love is Strange, where John Lithgow is married during the film but we needed to remove his wedding ring in a shot that was used in a scene prior to his marriage to Alfred Molina. With VFX, maybe a film can now be written 3.5 times!
Although many VFX are not executed until after the edit is locked, it’s never too early to get the VFX house involved. Whether in pre-production, during shooting, or mid-edit, don’t hesitate to reach out to your favorite VFX team and start up a conversation. All parties, the film, and even the audience will benefit!
Originally published: 10/02/2017
VR as Trojan Content
key marketing opportunity was putting logos on thousands of executives’ desks
Remember the ADP VR/360 project we shared a couple weeks ago?
In addition to the three-minute visual experience reinforcing ADP’s reputation as a cutting-edge company, the key marketing opportunity was delivering thousands of bold-branded Google Cardboard viewers, which now shout "ADP" on thousands of HR executives’ desks.
Our team of artists was excited to combine the CGI and VFX components with the live action for the project. We hadn’t yet seen a lot of graphics seamlessly combined with stereo 360/VR video, although it’s quickly becoming our most common request with the fast adoption of mobile VR and creative agencies looking for innovative twists on media.
Digital agency York and Chapel was tasked with presenting ADP’s newest software update using an innovative technique that would attract the attention of busy HR professionals. The production company Malka Media, directed by Jeff Frommer, asked us at Mechanism Digital to design an eye-catching concept to add a new dimension to 360/VR. After a few days of partnering with the client to learn the goal of the campaign, Mechanism’s lead designer, Fangge Chen, and I developed an idea displaying a virtual array of CGI holograms which would float in space around the viewer and be triggered by the actors’ interactions.
As VFX supervisor, I worked closely on set with Ben Schwartz, arguably one of the best 360 DPs in NY, who shot the live action back plates using a Nokia Ozo 360 stereo camera. The camera was mounted at eye level on a remote-control rover giving the ability to drive around ADP’s funky Innovation Lab in Chelsea and visit different rooms. Because the camera traveled about 60 feet we were not able to monitor a live/wired feed from the camera and had to review the footage between each take using the Ozo’s convenient cartridge. With the need for long takes and about 40 actors, I don’t think we could have pulled off the shoot as fast as we did with any other camera.
In post, my CGI and VFX staff at Mechanism Digital worked carefully to track and seamlessly composite holographic design elements into the footage. We built and animated elements in After Effects, rendered them into stereo space using Maya, output the equirectangular format, composited it back into the live action using After Effects with Mocha VR and Mettle’s Skybox, and finally laid back the spatial audio in Premiere.
We are proud of the spherical content and attention to detail that went into the project, although it makes me laugh to think the film was a bit of a Trojan Horse. We trust the customers still enjoyed the experience while learning about ADP’s products. I expect we’ll see more smart agencies using this content strategy as an excuse to deliver branded desk toys to customers.
Mechanism Digital is a CGI, VFX and new media development studio which collaborates with brand partners to develop strategies with innovative media.
Originally published: 05/24/2017
Mechanism Digital Celebrates three decades of CGI and VFX in NYC
Oldest enduring CGI and VFX studio in New York
The oldest enduring CGI and VFX studio in New York continues to innovate and originate.
An award-winning production studio in New York City since 1996, Mechanism Digital provides animation, design, visual effects and new media development for the film, television and advertising industries. Mechanism Digital is a smart and friendly group of passionate creatives who are committed to helping entertainment and marketing professionals tell their stories in memorable ways.
Founded by Lucien Harriot, a veteran VFX Supervisor, Mechanism launched into the effects industry three decades ago with a single Silicon Graphics “Super Computer” on a Spike Lee film, HBO promos, a David Letterman Top Ten Countdown and toy spots.
After word got out about the high-end East Village independent studio, his business started booming. At the end of the first year, and with the studio’s single room bursting with four SGIs and a half dozen other computers, it was time to relocate into a Soho industrial loft. Currently maintaining 15 artist workstations and rendering on 30 nodes in mid-town, Mechanism Digital is New York’s biggest little post-production secret, working a thousand shots each year for feature films, TV series, marketing media, medical visualizations and more.
The Mechanism Digital team is always prepared to deliver quick bids and shines at both the technical and creative processes behind the scenes. In addition to camaraderie on set and friendly rapport with DI houses, the team is very comfortable in the complex processes behind successful VFX, turning around shots and revisions with efficient online communication channels. The team is always excited to develop creative solutions to keep costs down without compromising the director’s vision.
Together with A-list partners in the film/TV, advertising and medical industries, this cutting-edge studio continues to grow and prosper. Just this year, Mechanism Digital produced hundreds of shots for several high-profile feature films including American Ultra, starring Jesse Eisenberg and Kristen Stewart, and has served several prime-time TV series currently on air.
Mechanism Digital continues to drive the industry with innovative technologies, software and hardware for its clients. With over two decades of experience in 360° panoramic photos and video for special events projects, the studio is now riding the wave of Virtual Reality as new consumer devices such as Oculus Rift and Vision Pro flood the market. 360 VFX, graphics integration and post-production have become strong areas of growth for Mechanism Digital.
“After 30 years, I still sleep, eat and breathe CGI and VFX,” says Lucien. “The fast paced advancement of this industry means there is never a dull moment and I am super excited to be working with creative clients and great projects. Every day we feel a buzz of energy at our studio as our team encourages and inspires each other to elevate every project with movie magic.”
Originally published: 08/02/2016
AI is a Tool for Artists
Intellectual property law has a lot of catching up to do.
Perhaps to the chagrin of Drake and The Weeknd, artificial intelligence continues to adapt to its environment. In the VFX world, however, AI has not replaced any of our tools yet — but it is changing how we start the work on a new project.
One reason AI is not much help beyond that initial step is the potential for copyright issues. All of the big image diffusion companies — DALL-E, Midjourney, Stable Diffusion — train their systems on copyright-protected content. That creates a sort of intellectual-property-based Sword of Damocles hanging over every image these platforms produce. Not only that, the gray areas of intellectual property law are numerous — and gray areas in particular often require astronomical legal fees to settle or litigate.
In contrast to the big three, Adobe has trained its diffusion model, Firefly, on only the stock footage that the Adobe company owns, licenses, or is otherwise permitted to use. This system is not yet perfect, as artists are allegedly not being paid for all of the uses their images have been racking up. Adobe is currently working on ways to compensate artists and photographers for using their imagery, though without much input from the artists themselves.
The other reason AI is not very useful after that initial stage is because it can’t really produce acceptable video yet. We're in the business of creating motion images, and the best looking visuals AI can do, currently, is a still image. The “holy grail” for VFX artists would be a means to convert AI imagery directly into 3-D models. Then the client would be able to essentially sign off on the look of an entire project much faster, and we as artists would be able to push the envelope on what’s possible.
When people say “AI is a tool,” I like to think of actual tools. Imagine the great impact the hammer must have had on civilization. We are in the middle of a similarly shifting period of time, and I think most artists are already excelling at the new tech.
So while our tools have not been replaced, they have been dramatically affected by generative AI — along with our workflow. We’re now able to quickly explore concept images with directors and producers, even though we won’t use the imagery directly. At this stage, AI is best leveraged as a brainstorming tool that’s available to us for whatever uses we can dream up.
Originally published: 10/04/2023
Tell Me a Story, Grandpa
Can we store loved one’s personalities in a computer after death?
Fortunately or unfortunately, I don't think humankind will ever figure out a way to upload human souls to a computer, like in the “San Junipero” episode of Black Mirror. No matter what you feel about spending eternity in a benevolent computer, there is an alternative that everyone can get behind.
Imagine instead a computer game called “Tell Me a Story, Grandpa.” After feeding it enough information, the game would produce a life-size, moving-and-talking avatar of a deceased loved one. Simply upload their biography, photos, home videos, social media accounts, email and chat history. Artificial intelligence applications, like ChatGPT, could easily fill in the blanks and dynamically generate a live conversation from a loved one who has passed on.
The components for this game of the future already exist in our world. There are technologies that can listen to a person speak and then mimic their voice based on text prompts. Depending on the context of the writing, today’s technology can make the voice sound happy, sad, serious, etc. Already available lip-syncing applications can work in concert with three-dimensional, model-building tech that produces a countenance in relief and from all angles.
The famous Morgan Freeman deep fake brought knowledge of so much of this technology to the masses, and, anecdotally, people seem to have become a lot more skeptical of every video they watch. At the same time, Apple is entering the virtual reality headset market — while televisions are getting wider and wider, filling up more and more of a human’s field of vision.
I think it’s great news for developers that Apple is getting into VR. It also promises to be a coup for end users, who will no doubt benefit from having a standard to coalesce around. Apple usually pounces into a market only after watching other companies fumble through it, so it will be interesting to see what their team found.
Humans will do anything — and probably, pay anything — to take in a good story, from installing intense television sets to wearing a VR headset while your cat looks on, rubbing its front paws together.
Originally published: 05/18/2023
Mechanism Digital VFX Provides Crucial Assistance in Completing Films for Sundance
As post-production professionals, we understand the significant role that digital visual effects (VFX) play in film production. In many cases, VFX can be the key to completing a film and bringing the director's vision to life. This was certainly the case for films at this year's Sundance festival. Weather can often be an obstacle for filmmakers, especially those who require specific environmental conditions to achieve their desired effect. For example, the adaptation of Ottessa Moshfegh's award-winning novel, Eileen, required 100+ shots with snowfall and ground cover plus period cleanup to transport actors Anne Hathaway and Thomasin McKenzie back to 1964.
Another film showcasing the power of digital VFX is Sundance Grand Jury Prize winner, A Thousand and One, set in 2001, requiring visual effects to portray the story's timeframe accurately.
Sundance Winner and Independent Spirit Award for Best First Feature
The impact of digital VFX is undeniable, and it's exciting to see what the future holds as technology continues to advance. As post-production professionals, we are crucial in bringing these films to life and creating genuinely immersive cinematic experiences.
Originally published: 03/18/2023
Prompting AI Imagery for Production
How do you train a computer to create art?
Artificial intelligence has evolved quickly these past few years, but recently there have been a number of breakthroughs in computer-assisted image generation — in both capability and consumer adoption.
Three big players in the AI sandbox include Google Dream, DALL-E, and Stable Diffusion. Stable Diffusion (SD), in particular, makes it easy to start playing immediately without technical installations.
Any AI that renders images does so by drawing on a little bit of everything, from hundreds of specific images to billions of random images. It is important to the process that the images have metadata — especially descriptions — either written by humans or generated by image recognition processes.
Google Dream’s evolution demonstrates how the training process works. Initially trained on images of dogs and other animals, most of its early imagery looked like strange, warped, trippy animals. As the technology has matured in the last few years, people are training it with a wider range of images with better outcomes.
SD’s claim to fame is different. It has gained a lot of attention for its behind-the-scenes hustle, which resulted in seed capital (or first-round investments) totaling $101 million. That is the largest seed investment any company has ever received. There are a lot of other companies that have received more money as they get to the second or third rounds — but as a first-round take, this was unprecedented.
Over on the public-facing side, SD allows the user to type in a description — and it creates an image of what the text describes. These text inputs are called prompts. The prompts can be descriptions of an object or a place, like a car or a house, but they can also be descriptions of a style, like watercolor or oil painting — or even a particular artist, like Vincent Van Gogh. It can also take a prompt in the form of a drawing to guide the image’s composition — for instance, if you draw a cube at a three-quarter angle along with a description of a house, SD will create an image of a house at the same angle.
You can even draw or describe whether the camera lens is a long lens or a wide-angle lens. The AI will also be influenced by which images are considered pleasing to the eye.
AI renderings are a combination of:
- the model it was trained on;
- the prompt it was given; and
- the image or inpainting it was fed.
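To make that combination concrete, here is a rough sketch of the text-plus-drawing workflow using the open-source diffusers library. The model ID, prompt, filenames, and settings are illustrative placeholders, not a recipe from any production.

```python
# Sketch of prompt + init-image generation with Hugging Face's `diffusers`.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

rough_sketch = Image.open("cube_at_three_quarter_angle.png").convert("RGB")

result = pipe(
    prompt="a small house at a three-quarter angle, watercolor, wide-angle lens",
    image=rough_sketch,       # the drawing that guides the composition
    strength=0.6,             # how far the AI may stray from the sketch
    guidance_scale=7.5,       # how strongly the text prompt is enforced
).images[0]

result.save("house_concept.png")
```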
Whether Google Dream, DALL-E, or Stable Diffusion produces actual art is the topic of my next article.
Originally published: 01/20/2023
The Intersection of Practical and Digital
Embellishing critical frames helps films pop.
In creating a movie, there are practical effects and digital effects. Practical effects generally get shot on set, like when a bottle that is made of candy glass is smashed over somebody's head. We don't do that.
We do sometimes use practical elements, like fire or smoke. In the case of smoke elements, they are shot on a black screen, as opposed to a green screen. Mechanism Digital is currently working on a project with the aim of adding noxious smoke to the inside of a car by using elements of cigarette smoke, which has its own unique characteristics. We didn't specifically want cigarette smoke — we’re depicting exhaust — but cigarette smoke demonstrated accurate motion for this particular scene.
Our shop is finishing a thriller called Eileen, starring Anne Hathaway and Thomasin McKenzie, which is set in 1962. Half the shots are exteriors of a small town. We try to choose towns that still look true to the period, but sometimes it’s hard to avoid a few modern details such as street signs or store logos which can be erased in post by the VFX team. One of the shots called for our heroine to enter a liquor store. The locations department found a flower shop for rent and instead of fabricating and installing the obligatory neon sign, we were able to dress it up digitally in post with movie magic. The film is set during winter so we also added falling snow and accumulation on the many roofs she would see while driving through town.
Guns and gunshots are another great example of using practical elements in a VFX setting. More and more movies are not putting caps or blanks into real guns, instead opting for non-operable “rubber” guns on set. A non-operable gun is as it sounds; you can't put a bullet in, it's impossible. As a result, we’ve been working on projects where we digitally manipulate guns to look like they have kickback by warping the frame a little bit if the actor doesn’t mimic the action realistically.
Although a muzzle flash only lasts for a frame, it can be one of the most dramatic frames of a film. We further embellish the muzzle flash by adding shadows on the wall, often including a coup de gross of flying blood, also casting its own shadow.
Even before the many bans on operable guns, Mechanism Digital did this kind of work because muzzle flashes frequently happen in between the frames of the film — not showing up in the footage. I’m grateful that we perfected our process to meet the increased demand for these kinds of effects. We absolutely love this industry!
Originally published: 08/30/2022
VFX Eye for the Budget Guy
A visual effects supervisor is like post insurance for your shoot.
Sometimes my work takes me to interesting places, like movie sets. I work on set as a consultant, specifically a “VFX supervisor.” My job is to make sure that mistakes aren't made that would result in additional VFX work in post-production.
Practically speaking, I’m an insurance policy. The director and director of photography (DP) rely on me to assure the shooting team that they aren't inadvertently creating expensive work in post. To do this, I'll have conversations with the director and the DP about the script and how to plan some of the shots as well as other helpful conversations each day on-set with the shooting crew.
Once in post, it’s necessary for VFX artists to know where all the lights were placed, and their angles in relation to the camera, for a given shot. In addition to acting as consultants on set, VFX supervisors take a lot of photographs and measurements, record lens data, and make sketches that depict where the camera and lighting were placed.
Right now I'm consulting on a feature that shoots 27 days on location. I’m only on set for about half of those shooting days because a lot of the scenes that require VFX will be filmed on the same days; the production department does their best to combine the shooting schedules for VFX scenes. For this project, the script calls for snow in the backgrounds of all outdoor scenes. Fake snow on the ground is relatively easy to do with practical effects like ice machines and snow blankets, or blowing potato flakes or soap foam if it’s falling close to the camera. Snow falling and covering the ground in the distance would require lots of additional materials and equipment, so it’s typically achieved with digital VFX. Often shots are a mix of both, with actors close to the camera, practical falling snow in the foreground, and all-digital snow in the background.
The popular notion is that 10,000 hours of practice are required to become an expert at most things. It's only after a lot of years working with visual effects on the computer that you really begin to visualize why something would be more expensive in post if it is shot a certain way. Hiring an outside VFX supervisor is a good way to help keep the cost of the entire film down. Our favorite projects are the ones our team shepherds from pre-production all the way through post, but the reality is that many productions don’t even talk with the VFX team until well after principal photography is finished. And that’s OK.
Originally published: 04/19/2022
Credit Check
IMDb has become an industry standard-bearer. Here’s one mistake that can cut someone’s credits in half.
It was a dark day 20 years ago, when I made this stupid mistake. One that comes back to haunt me over and over again — and the worst aspect is that I know it will always be there.
I’m talking about the time I misspelled someone’s name for the credits roll on a feature film. I had gone according to payroll records, under the assumption that the names had to be correctly spelled. Little did I know that this crew member had been receiving checks under the incorrect name and he experienced no trouble from his bank.
As I’ve written before, credits were a big deal before there was ever an IMDb. They serve as proof of an actor’s or crew member’s experience. Since IMDb became a major factor, ensuring the correct spelling of every single person’s name has become even more important.
Mechanism Digital once worked with a woman who was born in China with a traditional Chinese name. As her career blossomed, she began to use an Americanized name most of the time. That left her projects divided under two different names. A situation like this might also happen if people use a nickname or a shortened version of their name. The takeaway is to make sure you use the same name with IMDb at all times, or else you won't end up in a single search; the longer your IMDb page is, the more “cred” you have in this industry.
Name spelling is not the only thing to double-double check. Getting the job titles right is equally important. In our work, the titles are often given to us by the studio — and sometimes we can give ourselves the title.
The crew member whose name I misspelled didn't get too mad at me (or, he hid it well). That was a relief, considering that the final cut of a movie cannot be changed, and changing a name on IMDb takes a surprisingly large amount of patience and time. Since “The Mistake,” he's grown into a very successful animator, and I like to think I didn’t affect his career in a big way — but it still haunts me to this day because it could have been avoided.
If you're unsure of how to spell someone's name, or even if you are sure, always ask them for clarification.
Originally published: 02/25/2022
The Voice of Experience
VO recording sessions are easy once the right pieces are in place.
At Mechanism Digital, we do a lot of voiceovers for corporate communications videos. These could be explainers, medical videos, promotions for conferences, or corporate-sponsored research. These videos are typically just a couple of minutes long and are driven by voiceover as well as text on screen.
Some of the questions for clients and agencies to consider at the outset include:
- Should we use a male or a female voice?
- Would a foreign accent help this stand out?
- Would it be a voice of authority or a more nurturing voice?
- Is there much jargon? Does the video require talent who can pronounce complex medical terms like borborygmi?
After getting a feel for the kind of video the stakeholders want to produce, we access the roster of voiceover talent with whom we have an established relationship. We’ve built these relationships over the course of 25 years. It’s also helpful to have a relationship with some recording studios that represent talent. Certain people are very comfortable reading Latin and Greek medical terminology, which makes the recording session go faster. It’s usually a bad idea to go with standard radio voiceover announcers because they might stumble on words like duodenum (which, to be fair, probably has too many pronunciations).
The next step is introducing some voices to the stakeholders by way of demo samples — which are basically a collection of audio clips from various voiceover artists. Usually these come directly from the websites of the voiceover artists. At this point, some agencies will confer with their client, while others are empowered to make decisions. Mechanism Digital can work either way because our workflow prevents backtracking.
Executing the client’s creative vision is relatively hassle-free, thanks to advancements in technology making it possible for voiceover artists to record high-quality clips at home. All it takes is a small space, like a closet, with a microphone and a lot of padding. This transition was taking place even before the pandemic, so the kinks are mostly worked out of it.
To conduct an audio recording remotely, the audio engineer will establish a special high-quality direct connection using Source Connect or ipDTL and provide access to a client “bridge.” It’s very important that the client attend the session via a phone patch in order to make sure certain words are pronounced correctly, including the name of medical conditions, the product’s name, or the overall sentiment of the narrator’s voice.
Once all the interested parties are gathered for the recording session, the real work begins. In my next post I’ll talk about how we “keep the beat” during production and the persuasiveness of a great voice.
Originally published: 01/21/2022
Trust the Process
Making tough decisions about your animation is easier when you’ve got a proven process to follow.
Creating an animated video with a long-lasting effect is no easy task, but it doesn’t take a Hollywood budget to accomplish. Effective and streamlined procedures allow studios to bring big production values to more and more businesses. So how do studios do it?
Our phases of production are:
- Creative brief
- Mood boards
- Style frames
- Storyboards
- Animatic
- Motion
- Finishing phase
Every project starts with a creative brief. That involves sitting down with the client and listening to a description of both their brand and what this specific project needs to accomplish. The studio’s job is to ask questions and use active listening, to understand the client’s strategy, and how they want to present their product or service.
Clients usually have some rough ideas about what they're thinking creatively, and it's our job to take lots of notes and ask lots of questions. Back at the studio the creative director and producers gather research on the product/service — but also a lot of the competitors. This helps to understand the market and brainstorm ideas, and boil down what can work well for this particular client’s goal.
The creative team will develop a set of mood boards, which are often composed of images grabbed from web searches, previous projects that are similar, and images from competitors. Mood boards are basically several interpretations of the creative brief, given back to the client and presented in a visual way. Subjective terms like what constitutes “edgy” or “classic” are clarified between the design team and client. These mood boards inspire fonts, color palettes, and overall esthetics, then are presented to the client, with the primary purpose of triggering feedback. Often, it's more important to hear what they don't like rather than what they like. Constructive criticism gets everybody on the same page about images, colors, and visuals. Up until this point, the project was all words to be interpreted.
Style frames come next. If a style frame is done effectively, it should look like an image pulled from the final product. If it's a TV show, it's often the logo and the final landing frame. If it's a commercial explaining the inside of a printer, we'll present views inside the printer that explain it very clearly. If it's a medical explainer, it would be one or a few key frames of the explanation, showing how matte or glossy the final animation will look.
Another phase — which could also be happening in parallel — is the storyboard phase. Storyboards are thumbnail sketches of each key point in the story, sort of like a comic book. The storyboards typically go through a couple of iterations. Design teams may drop some story frames that are not necessary or add new story frames if a concept needs more explanation. Often these have text descriptions or arrows suggesting the motion or voice-over copy from the script. We can all look at these storyboards and make sure that we're telling the entire story and it's serving the purpose and strategy.
Once the storyboards are approved, we'll move to the animatic phase. The animatic is now the video blueprint for the rest of the piece which is set to the timing of a scratch voice over track. This gives us a sense of timing to make sure that each key point is on the screen for enough time, and that the shots are long enough for the audience to understand the concept, while also not being too long and boring for the audience! If it's a 30-second spot, the animatic will also be 30 seconds. The creative team then shares the animatic with all of the stakeholders. (If it's a medical animation, this is also the point at which it gets submitted for legal approval.)
Once the animatic is approved by the stakeholders, we move into the motion stage. The motion stage is when animators create motion in 3D animation software, 2D animation, or stop motion, and everyone can actually see objects move. The process involves creating motion for each of the story points, and as the project progresses, each of the storyboard frames is replaced by motion until the entire project is in full motion. In a first or draft pass at motion, the images don't have a lot of subtleties, reflections, or shadows. The colors may be dull and there's a lack of detail, but it's important that the client reviews and approves the motion before added detail, secondary motion, and further polish and refinement go in. Then the motion is approved for the finishing phase.
The finishing phase consists of color correction, high-resolution rendering, and the addition of effects like lighting and shadows. At this point, it's about making it beautiful, and not going back and changing any of the earlier phases. This is when the final voiceover is recorded and edited back into the piece, any music is added, and sound effects are added to round it all out.
I should mention that each of these phases is specifically chosen as a point where the client needs to make a decision, either suggesting revisions or approving before we move on to the next phase. For example, the goal of the mood board presentation is to get the client to choose an esthetic direction — or, sometimes, a combination of esthetics. Having deliberate design phases prevents having to go back and change one of the previous phases, which would be called a “change in direction,” and that is costly. The whole point of our process is to avoid any changes in direction so that we're always working in a forward motion and not having to go back and redo work, which costs additional money.
The final delivery often requires different formats if it's going to be posted in different places. Whether it's going to be on an HDTV, streaming on YouTube, presented in a square frame for Instagram, or an odd Facebook size, it’s important to know the different shape outputs from the beginning so we don’t cut off important information like characters or on-screen text.
This collaboration between the client and the creative team eliminates surprises. The client understands their product, and the creative studio understands the design and animation process. Together, the two teams hold the key to producing the best possible product for the client’s goal.
Originally published: 11/23/2021
From Hollywood to Healthcare
Bringing the best of both worlds to….both worlds.
Working with movie studios for 25 years has taught us how to hone our narrative storytelling skills in order to boil a story down to its elements. This allows us to advance a plot in a 30-second timeframe, as is commonly seen in television commercials.
By contrast, Hollywood films benefit from a longer run time and a longer form of storytelling that allows there to be a story arc over a period of time.
Corporate and medical communications traditionally chose FX studios that specialized in short-form storytelling — but then the digital revolution happened, bringing with it a panoply of venues for advertising, as well as neat technological innovations that people can try in person. New, longer-form content was needed to make use of these new venues, like virtual reality headsets.
The medical industry often educates and engenders empathy using video case studies as a time-honored way to help healthcare providers appreciate the impact of a medical condition on daily life. Virtual reality is an excellent tool that allows us to put the participant in a patient’s 360-degree point of view. These experiences complement the treatments for conditions — and put healthcare providers in the shoes of the patient to elicit empathy. The soundtrack or the voiceover can be used to simulate the patient’s inner monologue, which further places the audience in the patient’s shoes.
A VFX studio that works in both of these industries can leverage its complementary experiences.
This point was made clear to us during a presentation we recently made to an advertising agency. The agency had traditionally relied on studios that specialized in short-form storytelling. We were billed as some big Hollywood movie-making company that does visual effects and CGI — that also works with the medical industry. They were very excited about what we were going to be bringing to the table.
While presenting their team with a series of before-and-after VFX shots from our recent work on the film Uncut Gems, we talked about how the storytelling experience can be leveraged for marketing projects. It is exciting to bring Hollywood to agency clients’ projects, which in turn helps us all engage more enthusiastic audiences.
Everyone who is making their own content and media dreams of being up on the Oscars stage. We share in that excitement and are eager to bring our experience to the table for all types of projects.
Originally published: 09/24/2021
The Wisdom of a 25-Year-Old
Lessons learned in 25 years.
25 years is a long time in “computer years”. Our digital studio has learned a lot in the last two and a half decades, and I hope that others can learn from my mistakes. To that end, here are five “lessons learned” I’d like to share with the benefit of experience and hindsight:
A five-minute conversation can easily save $5,000 on set. There are a lot of decisions to be made on a set which influence how the filmmaking process is going to be handled in post-production. It's best if producers can bring us in early in the process. Some options we can suggest may be more or less expensive, and some options have more or less flexibility later on if filmmakers decide to change their mind about a creative direction. If producers call us up before a project, we're always excited to brainstorm and talk about the options they have, or answer questions about how effects should be handled “in camera”.
We hire for creativity because it's easier to teach technology. Over the years, I've found it makes a lot more sense to hire for creative passion because we can’t train that characteristic in a person. We’re better at teaching our creative artists and producers technology, rather than the other way around.
Early on, I was intimidated by the design process. It was almost as if designers had magical powers. As my career progressed, I learned that it's actually a methodical process that involves close collaboration with the client. The client understands their brand and product, and our team uses Mechanism’s deliberate, multi-phase process for boiling down and teasing out the intersection of their brand and goal of the campaign. Together we hold the key to uncovering the best answer to the puzzle.
I wish we had encouraged artists to work remotely before the pandemic. Now that we've lived the remote lifestyle, I’ve come to appreciate the new life that's been breathed into our staff as a result of working from home and spending more time with their families.
I always assumed our shop would eventually be remote at some point in the future because we don’t really need to be in the same room to produce a digital deliverable. It just all happened a little sooner than I thought thanks to the pandemic — and we see the results.
Using cloud technology saves money. Business changes with the seasons. Sometimes we have five artists working, and sometimes it’s 25. The cloud provides flexibility that allows us to ramp up with the fastest, most expensive computers for producing and rendering — and we can shut them down without having to maintain these machines while they’re idle.
When I started 25 years ago, one had to be a computer programmer to be in this business, but now creatives can leverage digital tools to tell stories much more easily — without having to know as much about how computers work. The result is the best of both worlds. Looking forward to the next 25!
Originally published: 08/12/2021
Tribeca Film Festival Double Feature
We’re thrilled to see two of our films at Tribeca this year, and look forward to seeing you there in person.
Congratulations to all the cast and crew with films premiering in the 2021 Tribeca Film Festival. Mechanism Digital is very proud to have two features in this year’s festival, Catch the Fair One and Werewolves Within; our NYC studio supervised on-set VFX and designed and executed the visual effects for both films.
Catch the Fair One had its fair share of gun-play muzzle flashes and blood splatters throughout its run time — one gory shot even required us to blow off half of someone’s face with a shotgun blast. Werewolves Within required hundreds of shots to have snow added to the ground. A warm winter was not supportive of the storyline of an avalanche trapping the cast in a mountain lodge. The werewolf character required transformational effects and wire/rig removal effects in order to give it the ability to run up and across walls.
Working remotely in 2020 had its challenges, including limited access to the studio, as well as having to pivot our means of team/client communication. Autodesk’s Shotgun creative project management software served as an invaluable tool for tracking directors’ comments and delegating tasks to our remote artists — which included green screen keying, rotoscoping, CGI bullets, blood, wolf elements, and falling snow. Given the frustrations of lockdown, the team welcomed these exciting projects and gave their all to help make these films shine. We wish success to all the films at Tribeca and look forward to seeing our beloved film and TV industry back at 100%.
Since 2002, the Tribeca Film Festival has heralded the arrival of summer in New York City. Founded by Robert De Niro, Jane Rosenthal, and Craig Hatkoff, hundreds of films are screened each year — attended by thousands of people from around the world.
Originally published: 06/05/2021
Be My VRalentine
For Valentine’s Day we’d like to share this floaty experience we produced for our friends and loved ones.
Check this link to transport yourself into a lighter than air scene. Desktop and mobile devices are fine, but VR headsets yield the full 360 experience. Go full screen and 4k!
VR360 can be quite technical, especially in all its flavors, but that doesn’t mean we can’t use it to invoke sweet emotions. As media producers, it’s important we focus on the final sentiment we want to convey and not get caught up in using technology for technology’s sake.
Mechanism Digital’s team has been producing spherical content for about two decades, and I have very much enjoyed pushing the evolution of 360 media to its current forms. In the beginning, we produced spherical imagery with the use of nodal tripods, allowing a few dozen photographs to be shot one at a time and placed into an array that shares the same no-parallax point or “entrance pupil”. In other words, the camera was carefully rotated between each shutter release to capture every angle of the environment from a single point in space. These photos would then be “stitched” together in our workstation software, resulting in a single file to be viewed in QuickTime as a QTVR file or even some Java formats for the web, allowing the viewer to pan around the scene in all directions, including up and down.
Over the last few years, the media and hardware have made great leaps into the motion/video world, allowing action to happen all around the viewer by shooting with multiple video or film cameras. One of the challenges with video is that multiple cameras (typically 2 to 16 in a rig) can’t all use the same exact no-parallax point, as the laws of physics still state that two objects (or cameras) cannot occupy the same space at the same time. The bigger the cameras, the further apart the lenses are physically mounted, which causes double vision as objects get close to the camera rig. The challenge has been to use smaller cameras without compromising quality.
As a VFX and digital animation studio, Mechanism Digital takes advantage of working “virtually” in the computer, where we can actually have all cameras at the same center focal point at the same time and therefore render mathematically perfect spherical images every time, and we don’t even have to paint out the tripod! Objects can be close or far without any parallax problems, and we can easily export stereoscopic content, which requires double the number of cameras. 360/VR computer-generated imagery (CGI) video is perfect for taking your audience where you can’t put a camera. We use CGI for medical education inside the human body at the microscopic scale, for architectural visualizations, or for telling stories with animated characters and talking animals. For projects requiring live action, we regularly use creative techniques to combine real actors and CGI elements, taking advantage of the best of both worlds. Visual effects and graphics can be added all around the viewer as well, which definitely adds a new dimension to storytelling.
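For readers who like to peek under the hood, the “mathematically perfect” part comes down to a simple latitude/longitude mapping: every pixel of the equirectangular output corresponds to one view direction from that single no-parallax point. Here is a minimal standalone Python sketch of that mapping (axis conventions are illustrative and vary between packages; this is not our production code):

```python
import math

def direction_to_equirect(dx, dy, dz, width=4096, height=2048):
    """Map a view direction (from the single no-parallax point) to a pixel
    in an equirectangular (lat-long) frame such as a 4096x2048 render."""
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    lon = math.atan2(dx, dz)      # -pi..pi, rotation around the viewer
    lat = math.asin(dy / length)  # -pi/2..pi/2, looking down..up
    u = (lon / (2 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return u, v

# A direction straight ahead lands dead-center in the frame:
print(direction_to_equirect(0.0, 0.0, 1.0))  # -> (2048.0, 1024.0)
```

Because the CGI camera evaluates this mapping from one shared point for every pixel, near and far objects line up perfectly, which is exactly what a physical multi-camera rig cannot do.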
For the Valentine’s Balloon project we used our proven multi-phase design process, which includes mood boards, style frames, and even 360 storyboards, to nail down the overall look, feel, and animation of the experience. Our 3D balloon models were modified and painted/textured by our artists, but we decided not to animate them by hand; instead we used the random precision of particle motion with Maya’s dynamics simulations to give the balloons a realistic wobble and rotation. The balloon geometry was parented to these individual particles and cached out to render on our render farm without having to perform particle run-ups for every frame.
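For Maya users who want to try the same trick, the general setup can be roughed out with a few maya.cmds calls. This is only a minimal sketch of the approach (the object names are placeholders, and the real scene used hand-tuned field settings and cached the simulation before farm rendering):

```python
import maya.cmds as cmds

# A proxy balloon and a small row of particles to carry the motion.
balloon = cmds.polySphere(name='balloon_proxy')[0]
particles = cmds.particle(p=[(x * 2.0, 0.0, 0.0) for x in range(10)],
                          name='balloonParticles')[0]
particle_shape = cmds.listRelatives(particles, shapes=True)[0]

# A turbulence field supplies the random wobble and drift.
turb = cmds.turbulence(name='balloonTurbulence', magnitude=4.0, frequency=0.5)
turb_name = turb[0] if isinstance(turb, list) else turb  # return type varies by version
cmds.connectDynamic(particles, fields=turb_name)

# Instance the balloon geometry onto each particle so the mesh follows
# the simulated motion instead of being keyframed by hand.
cmds.particleInstancer(particle_shape, addObject=True, object=balloon)
```

From there the particles can be cached so the farm doesn’t have to run the simulation up to each frame, and the instanced geometry renders normally.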
The clouds required the most research and experimentation time to develop their look. Our lead artist and designer, Fangge Chen, spent a couple of days testing the shapes and working on ways to bring render times down, as Valentine’s Day wouldn’t wait for the long render times that are common with smoke simulations. We settled on settings that looked great for wispy edges and would calculate on our render farm in time for V-day.
We decided not to produce this project in stereoscopic, as the format was not supported by all devices and we wanted the experience to be viewed by as many friends and colleagues as possible with easy distribution. You can view it on a standard computer and click/drag to look in all directions, but the most dramatic effect comes from using a headset like the Oculus Go.
The project was fun for the whole studio, and pushing the envelope further always adds to the experience we apply to future client projects. Even with the additional render time and technical needs that come with VR, we are excited about the many upcoming projects we are doing this year.
Originally published: 02/14/2019
5 TIPS to create 360° videos
Top tips to start shooting successful 360° videos
By: Michaela Guzy & Lucien Harriot
So why would you want to create a 360 video anyway?
We all know that a video is more engaging than a static photo, but imagine being able to share with grandma, the kids and the neighbor what you actually experienced on vacation.
Since YouTube, Vimeo and Facebook have all launched 360 viewing platforms on desktop or handheld devices, we can literally have a look around, without those crazy looking goggles. We don’t think 360 will ever replace the transformative experience of actually visiting a new destination with all the sights and smells, or authentically connecting with local people, but it’s about as close as you can get to bringing the destination to life for your audience.
In this article and accompanying video tutorial, we are going to share our TOP 5 tips for documenting your next vacation in 360°.
You will learn how to:
● Choose the right 360 gear for documenting your next trip
● Understand the difference between 360 resolution and HD
● Prevent motion sickness
● Position your 360 camera and yourself
● Share your memories with your friends, followers and social networks
So who are we? And why would you listen to us in the first place?
● Michaela Guzy, Founder & Chief Content Creator of OhThePeopleYouMeet, a website and video series for travelers, foodies & philanthropists seeking authentic local connection everywhere they journey.
● Lucien Harriot, VFX Supervisor and Executive Producer of Mechanism Digital, a visual effects and VR studio in NYC.
Let’s have a look around, shall we?
#1 Beginner 360 Gear: Choosing the option that’s best for your needs
There are several user-friendly, high-quality 360 camera options on the market for the novice:
- Ricoh Theta S: is a good value at $370 USD, but shoots low resolution, meaning the quality is fairly low and video will be blurry when seen on YouTube or in a headset
- Nikon Keymission 360: is $500 USD, and shoots high quality images and video, but the app can be difficult to set up
- 360fly: is $500 USD. 360fly only has one lens, making it inexpensive but there is a large circle area at the bottom which the camera won't capture, so it’s not completely immersive
- LG 360: is $200 USD and great if you just want to test the 360 waters, although the picture and video quality is very low
- Samsung Gear 360: has a new model and, at $190 USD, is probably the best value for money, with ease of use and high-quality video output
The good news is that with each of these consumer options, everything is automatic and always in focus. Many of these cameras are water resistant or even waterproof, so that might influence your decision. If you are planning to do a narrated piece, note that audio from the camera will pick up a lot of environment or background noise, so choose a quiet setting; more advanced creators might consider using external lavalier microphones (LAVs).
#2 360 Resolution
Although HD is the standard for your TV at home and many computer screens, in 360 video 1080/HD is not enough resolution, and even 4K is low. Your TV screen only takes up about 20-25 degrees of your view, so covering a full 360° would require about 16 monitors around you to keep the image comparable to the sharp HD quality we are used to. Think of it like this: the captured image needs to be stretched all the way around your head, which means there aren’t enough pixels for a crisp, clear image.
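Here’s the quick back-of-the-envelope version of that argument, using roughly 22.5° for the slice of your view a TV fills (a rough sketch; the exact numbers depend on screen size and seating distance):

```python
# Pixels-per-degree: an HD TV versus a 4K 360 video wrapped around you.
hd_width_px = 1920        # horizontal pixels in an HD frame
tv_view_deg = 22.5        # approx. horizontal angle a living-room TV fills
uhd_360_width_px = 3840   # horizontal pixels in a 4K 360 frame

hd_density = hd_width_px / tv_view_deg        # ~85 pixels per degree
vr_density = uhd_360_width_px / 360.0         # ~10.7 pixels per degree
monitors_to_wrap = 360 / tv_view_deg          # ~16 screens around the viewer
hd_sharp_360_width = hd_density * 360         # ~30,700 px wide for "HD-sharp" 360

print(round(hd_density), round(vr_density, 1),
      round(monitors_to_wrap), round(hd_sharp_360_width))
```

In other words, a 4K 360 frame delivers roughly an eighth of the pixel density you’re used to from HD on a TV, which is why it still looks soft in a headset.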
#3 Reasons we don’t suggest handholding 360 Cameras
- Your arm will look all funky and distorted. We have an illustrative example in our video tutorial.
- You’ll also end up moving the camera which can make the viewer sick
- Don’t pan or tilt 360 cameras; that’s the viewer’s job to look around in VR. We have demonstrated this in our video… see what we mean?
- With every rule comes an exception…
- Shooting in a moving car or boat can work fine as humans are used to seeing straight forward motion while sitting
- Whereas in a VR point of view, sitting on a roller coaster will probably cause the viewer to throw up. The nausea comes from the disconnect between what your eyes see and what your equilibrium feels
- Your low-level brain sends out a warning that you probably ate some bad sushi and you know how that ends up...refer to our video for some illustrative examples
#4 How to set up your 360 camera
- 1) THE STAND: Do use a monopod/unipod with small legs to minimize how much of the stand viewers can see in the shot. Set it to a human’s line of sight, or eye height. Refer to our video to see how a low shot appears; look down and you’ll see the stand.
- 2) PLACEMENT OF THE STAND: Place the camera in the middle of your scene so the viewer feels like they are part of the action, not watching from the sidelines. For sports, though, sitting on the sidelines may be best so the camera doesn’t get run over by the players
- 3) LIGHT SOURCE: Angle the side of the camera toward the main light source (the sun) so it falls between the lenses, especially with the two-lens cameras we listed (all except the 360fly). If the lenses receive drastically different amounts of light/exposure, there will be a noticeable line where the two images are stitched.
- What is stitching anyway? These two-lens cameras are actually two cameras back to back, and stitching, performed either in the camera or in the software that comes with it, is the process of combining two 180-degree images into one seamless 360-degree photo.
- If possible, shoot outside with available daylight, as 360 cameras don’t have a flash and dark scenes will show noisy compression artifacts, which you can see on the floor in our video.
- 4) NARRATION: If you are in the shot, vs. hiding for a scenic landscape... be the action, you are the action. Look at the camera and smile or talk and tell us a little bit about where we are. Point out notable things in the scene with your hands while also giving verbal narration as the viewer may not be looking in your direction while you are pointing. You might even walk around the camera (remember don’t move the camera) as you talk about things in different directions. Experiment with standing next to the camera and ignoring it, and then test a shot to see how it looks when you are speaking to the camera. If you look directly at the camera, the viewer will feel you are looking at them while you talk.
- 5) DISTANCE: We also suggest pre-testing your shots to see how far away from the camera you want to appear. Different cameras require different distances from the stitching edges to avoid distortion.
- Don’t get too close to the side of the camera between the lenses as the stitching process can distort near objects and you might see your nose disappear
- The larger the camera bodies (or the further the lenses are from the center point), the further you need to stay from the entire rig; see the rough parallax math sketched after this list:
- Small cameras like the Samsung Gear or Nikon KeyMission are “ok” at approximately one foot away or further from your subject matter, but test to make sure. It’s best to put your subject in front of the front lens
- Larger rigs like GoPro multi-camera mounts may need four feet of safety. Directly in front of a lens is “ok”, but if one part of your body is in front of a lens, there is a good chance that another part of your body is crossing between adjacent lenses
- IF YOU DON’T WANT TO BE IN THE SHOT: Use the app to start filming or monitoring the action remotely
- IMPORTANT NOTE: 75% of what a viewer looks at is the front 90 degrees of their view, straight ahead. When a viewer first gets into a scene, they may look around to get a sense of their surroundings, but after that they mostly face forward unless they are bored or you indicate something… “over there” or “to my left you see”. You can direct a viewer to look around and even look behind them, but considering most viewers are sitting in a chair or on the sofa, looking behind them or to the side for long periods can be uncomfortable, so most of the content should be forward. Refer to our video to “look around”.
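To put rough numbers on the one-foot versus four-foot guidance above, the stitch error is driven by the angle between two adjacent lenses’ views of the same nearby object, which grows with lens spacing and shrinks with distance. The spacings below are illustrative assumptions, not measured specs:

```python
import math

def stitch_disparity_deg(lens_spacing_m, subject_distance_m):
    """Approximate angular disparity between two adjacent lenses looking
    at the same subject -- a rough proxy for visible stitch error."""
    return math.degrees(2 * math.atan((lens_spacing_m / 2) / subject_distance_m))

# Illustrative numbers: ~3 cm between lenses on a small two-lens camera,
# ~12 cm between adjacent lenses on a bulky multi-camera rig.
print(round(stitch_disparity_deg(0.03, 0.30), 1))  # small camera, ~1 ft away -> ~5.7 deg
print(round(stitch_disparity_deg(0.12, 0.30), 1))  # large rig,    ~1 ft away -> ~22.6 deg
print(round(stitch_disparity_deg(0.12, 1.20), 1))  # large rig,    ~4 ft away -> ~5.7 deg
```

A rig with four times the lens spacing needs roughly four times the distance to keep the seam error about the same, which is exactly the pattern behind those rules of thumb.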
#5 Sharing is caring
So now you’ve captured amazing 360° content from your trip and it’s time to share with your family, friends and social networks. So how?
- If you recall from the beginning of our video, Facebook, YouTube and Vimeo all have 360 platforms, so you don’t need the goggles to view 360-degree videos anymore
- Whichever platform you choose, you need an account and its app on whichever device is connected to your 360 camera
- Facebook is the most user-friendly place to share 360 videos and photos
- YouTube is easiest if the video is more than a few minutes and a larger file
- Vimeo now supports 360 and is known for the best quality (in other words, the least compression), but videos there are not shared socially as much as on Facebook & YouTube
- Facebook and Vimeo have a nice feature which allows you to set your starting angle. YouTube chooses via their own algorithm
- Upload instructions on the platforms are easy to use. Read the step by step instructions for each site (this is getting easier and more automated all the time)
Don’t forget to subscribe to Mechanism Digital on YouTube for more VR videos and check us out on LinkedIn and Facebook.
Once you graduate from our beginner course, check out our intermediate and pro tips too.
Journey on!
Originally published: 08/23/2017
VFX Helps Keep THE DETOUR in NY
We love this industry and we love this town. Thanks NY!
Running a visual effects company is such a pleasure, from the challenge of developing something an audience has never seen before to learning and using ever-changing technologies. One aspect of VFX that does get me down, though, is producers’ concern about how expensive effects can be. It always feels good when we are contacted with the intention to save money by utilizing visual effects wisely.
Recently the team at Mechanism had the pleasure of working on the hilarious TBS series The Detour, created by Jason Jones and Samantha Bee. Directed and produced by Jason and Brennan Shroff, the series’ what-in-the-living-hell-is-wrong-with-this-family script called for our ill-fated family to travel from NYC to sunny Florida to tropical Cuba and somehow end up in a frozen mountain town near the Arctic Circle.
The strategic decision by the production manager, David Bausch, to shoot in Long Island helped to save substantial budget monies by avoiding travel and lodging and keeping the show “Made in NY”.
Creatively making do comes with its challenges, but with David’s careful planning it paid off. Episodes needed to be shot out of order: in the story, the trip to Cuba didn’t come until the last few episodes, which were slated to be shot in December, when it would be too cold for drifting out to sea on a raft or walking through the Cuban surf. Shifting these sequences several months ahead created many exterior location benefits. To pull off the NY-for-Cuba shoot, our team, directed by our lead digital compositor, Fangge Chen, used clever visual effects and lots of rotoscoping to change Long Island Sound’s brown water to a beautiful Caribbean blue.
A private island sequence was shot in the courtyard of Vanderbilt’s beautiful Spanish-style summer mansion in Centerport, NY. We supervised VFX alongside the fire department for a sequence where a 50-foot statue of Saddam Hussein was to be blown up with dynamite. I hope we didn’t waste the fire department’s time, as the entire explosion was created in post: a CGI replica of the statue was shattered through computer simulations and composited fire.
Keeping the production in New York made it possible for the directors to easily jump back and forth into the edit at Jax Media for in-person reviews, as opposed to having to settle for phone discussions, which can often lose something in translation.
Additional visual effects included an exploding cow, ocean extensions, rig removal, and turning a beach motel into a frozen tundra outpost. The network needed the last few episodes delivered a few weeks early and found a few more water shots to make blue. No problem: we reallocated several artists to the project and cranked the VFX shots over the finish line.
It’s a great month when we know we kept money in our home town, saw our craft on the screen and best of all made people laugh. We love this industry and we love this town. Thanks NY!
Originally published: 05/04/2017
Floating in the clouds with Virtual Reality 360 Video
Love is all around you
Our studio wanted to create something classy for Valentine’s Day while having fun with the latest technologies. After some brainstorming there was no question: we would float in the clouds with balloons alongside our friends and loved ones.
VR is a huge buzz these days, with Cardboard in 5 million users’ hands and Oculus and other hardware about to be released. Mechanism Digital’s team has enjoyed producing spherical content for about 15 years. In the beginning we created imagery with nodal tripods, which allow an array of multiple photographs (usually 14 to 38) to be shot in all directions, one at a time, from the same no-parallax point, also known as the “entrance pupil”. The camera is carefully rotated between each press of the shutter to cover all POV angles of the scene. These photos would then be “stitched” together using software to make a single file that could be viewed in QuickTime as a QTVR file, or even in some Java formats for the web, allowing the viewer to pan around from a single point and see in all directions, including up and down.
The last couple of years, the content has made great leaps into the motion/video world, allowing action to happen all around the viewer. One of the challenges with video is that multiple cameras (typically 2 to 16 in a rig) can’t all use the same no-parallax point, as the laws of physics still state that two objects cannot occupy the same space at the same time. The bigger the cameras, the further apart they need to be physically mounted, which causes double vision as objects get close to the camera rig. This physical size problem has made GoPros popular because of their small format, allowing subjects to get within 12 inches without major problems. In addition to the parallax problems, camera lenses naturally have different distortions in their curvature, which can also make stitching images together seamlessly a problem.
As a VFX and digital animation studio, Mechanism Digital enjoys working virtually in the computer, where we can actually have all cameras at the same center focal point at the same time and therefore render mathematically perfect spherical images every time, and we don’t even have to paint out the tripod! Objects can be close or far without any parallax problems, and we can easily produce stereoscopic content (stereo is a whole other set of compromises I’ll save for another post). Computer-generated 360 VR video is great for taking your audience where you can’t put a camera. We use it for medical education inside the human body at the microscopic scale, for architectural visualizations, or for telling stories with animated characters and talking animals. Most live-action 360/VR video is for putting the viewer in a physical environment with real actors, and there are creative techniques to combine live and virtual cameras for VFX or even broadcast graphics.
For the Valentine’s Balloon project we used our standard phases of production/design including mood boards, style frames and storyboards to nail down the overall look, feel and animation of the experience. In most 360 projects it is important to create storyboards for all views including front, left, right, back and sometimes up and down. For this project we only had to plan forward and downward views as the content was similar all around the viewer.
Our balloon models were modified and textured in Maya 2016, where most of the work on this project was performed. We used particles with random dynamic motion for the balloons’ wobble and rotation. The balloon models were parented to particles and cached out to render easily on our render farm without having to perform particle run-ups for every frame.
The clouds required the most time to figure out and develop their look. Fangge Chen, our lead artist/designer, started developing her own clouds using Maya Fluids, but ultimately we found, and were very happy with, the EMFX Clouds script, which can be downloaded from Creative Crash. Fangge still spent a couple of days testing the shapes and working on ways to bring render times down, as Valentine’s Day wouldn’t wait for render times that ran anywhere from one to five hours per 4K frame. We settled on a quality that looked good for wispy edges and would calculate on our farm in time. One sacrifice for the clouds was to bake/freeze the simulation, avoiding the need to run it up for each frame, although this means the clouds would not seethe over time.
With our looming deadline we opted to use the Domemaster 360 Mental Ray lens, which reduced render time and avoided the LatLong conversion, although it does have a thin seam on the left and right if you look closely. See Andrew Hazelden’s blog for many useful VR tools for Fusion, After Effects, Nuke, and Maya. For most client productions we use our custom in-house six-camera Maya rig and composite the six 960-pixel-square images into one perfect equirectangular frame (AKA LatLong) using the Domemaster plugin for Blackmagic’s Fusion. This technique produces arguably better results and is more flexible, but the additional step after the 3D render requires considerable time, adding another minute per frame in animations that are often 1800 frames per shot.
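For the curious, what that cube-to-LatLong composite does is conceptually simple: for every pixel of the equirectangular output, compute the view direction, work out which of the six square renders that direction hits, and sample it there. Here is a simplified standalone sketch of that lookup (face orientations and flips are glossed over; the Domemaster/Fusion tools handle those per-face corrections for real):

```python
import math

def latlong_pixel_to_direction(i, j, out_w=3840, out_h=1920):
    """Convert an equirectangular output pixel to a unit view direction."""
    lon = (i / out_w - 0.5) * 2 * math.pi
    lat = (0.5 - j / out_h) * math.pi
    return (math.cos(lat) * math.sin(lon), math.sin(lat), math.cos(lat) * math.cos(lon))

def cube_face_uv(dx, dy, dz):
    """Return which cube face a direction hits and the (u, v) sample point
    on that face in [0, 1] -- the core lookup behind the composite."""
    ax, ay, az = abs(dx), abs(dy), abs(dz)
    if ax >= ay and ax >= az:
        face, m, a, b = ('+x' if dx > 0 else '-x'), ax, dz, dy
    elif ay >= az:
        face, m, a, b = ('+y' if dy > 0 else '-y'), ay, dx, dz
    else:
        face, m, a, b = ('+z' if dz > 0 else '-z'), az, dx, dy
    return face, (a / m + 1) / 2, (b / m + 1) / 2

# The center of a 3840x1920 LatLong frame looks straight down +z,
# i.e. the middle of the front 960x960 render:
print(cube_face_uv(*latlong_pixel_to_direction(1920, 960)))  # ('+z', 0.5, 0.5)
```

Running that lookup for every pixel of every frame is where the extra minute per frame in an 1800-frame shot comes from.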
We decided not to go stereoscopic for this project, as it is not supported by the iPhone on YouTube, and we wanted the experience to be viewed by as many friends and colleagues as possible with easy distribution.
You can view it on a standard computer in YouTube and click/drag to look in all directions, but the most dramatic effect comes from using a headset like the Cardboard or Samsung Gear VR viewer. The Samsung Gear is like a high-end Cardboard that is more comfortable and keeps the light out from around your eyes, but it can only be used with a Samsung phone. The iPhone currently isn’t working on YouTube with the Cardboard, but you can still look around in the YouTube app on the iPhone (and other phones/tablets) as if you are peering through a window into another “virtual” world.
The project was fun for the whole studio, and pushing the envelope further always adds to the experience we can apply to future client projects. Even with the additional render time and technical needs, we are excited about many upcoming projects in 360/VR!
Check out Mechanism Digital’s page for the Valentine’s Balloons and several other 360 VR video experiences our studio has produced:
Originally published: 02/15/2016
The heartCam Story (how to set your app free)
A Journey of Innovation: Unleashing Potential Through Technology and Creativity
In 2008, mobile apps on iTunes became the next hot thing, and we all began to hear stories of people becoming overnight millionaires with simple apps. I wanted my company, Mechanism Digital, to get into the game by putting my studio’s abilities in 3D animation and visual effects to work.
Researching the successful apps, we saw they ranged in complexity from simple gags (like Koi Pond or iBeer) to more complex apps using existing game code or leveraging the popularity of established intellectual property (like Crash Bandicoot or Google Earth). To start, we decided to produce a couple of simple apps to learn the technology and aesthetics, and possibly even go viral with an original idea. Over the next few months, we produced several apps as freebies to test public reaction. Our apps included iBoom, iBlink, and iBreathFire, which taught us a great deal about developing in Xcode and the iTunes approval process, but, alas, they did not go viral.
Over the next couple of years, the studio landed several good-sized contracts producing enterprise (private) apps for Medical Education Agencies, an area in which Mechanism Digital excels due to a wealth of expertise in creating animated medical explanatory videos. We also started incorporating other fun technologies, such as Augmented Reality and real-time 3D rendering engines like Unity. Augmented Reality uses an image or text “trigger” to instantly display an image on a mobile device screen that overlays the “real world”. We decided to create an app which was a bit more sophisticated and fun, to see if we could finally go viral and make millions!
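The core of any marker-triggered AR experience is the same two steps: detect the printed trigger in the camera feed, then draw your content registered to it. HeartCam itself was built with the mobile toolchain of its day, so the snippet below is only an illustrative stand-in using OpenCV’s ArUco module (opencv-contrib-python assumed, and heart_frame.png is a hypothetical pre-rendered overlay), not the app’s actual code:

```python
import cv2
import numpy as np

# Use a predefined ArUco dictionary as the stand-in "trigger" image.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary)

cap = cv2.VideoCapture(0)                # live camera feed
overlay = cv2.imread('heart_frame.png')  # hypothetical pre-rendered heart frame

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is not None:
        # Warp the heart frame onto the detected marker's quad.
        h, w = overlay.shape[:2]
        src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        homography, _ = cv2.findHomography(src, corners[0][0])
        warped = cv2.warpPerspective(overlay, homography,
                                     (frame.shape[1], frame.shape[0]))
        frame = cv2.addWeighted(frame, 1.0, warped, 0.9, 0)
    cv2.imshow('heartCam-style overlay', frame)
    if cv2.waitKey(1) == 27:             # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

A real app would add camera-pose estimation so the heart renders as a 3D object anchored in the chest rather than a flat image, but the detect-then-overlay flow is the same.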
In the next few weeks, using our efficient five-phase design process (mood boards, storyboards/wireframes, style frames, motion/layout, and finishing), we developed HeartCam, an app which creates the unique and dramatic effect of peering into someone’s chest to see their beating heart. Since in-house creative decisions are made in real time, as opposed to waiting for client review, the project moved swiftly and used only ten days of labor from design to final build.
Once the app was submitted to and approved by iTunes, which takes about a month, we offered it free of charge in order to garner heavy downloads and immediate feedback via the reviews. We found that some people were confused about how to use the app’s external AR marker, so we went back and spent a day or two developing clear instructions. We then tested the new instructions on the streets of New York, using complete strangers as external consultants. Happily, iTunes’ approval time is much shorter for revised apps.
Now we were getting great reviews and seeing 100 downloads a day, which was exciting to watch after all the hard work. A couple of weeks later, we decided to switch it to a paid app and watch the money roll in. We set the price at $0.99 and watched the number of downloads plummet! With just two sales each day for two days, a total income of four bucks, we realized it would take almost thirty years to recoup our costs! I made the decision to see how far the app could spread if we just set it free.
Three years later, we have had over 100,000 downloads, seen the app demonstrated at conferences like SXSW, and seen it mentioned in digital advertising articles.
Our staff is very proud of the app, as we all had creative input on its design and got the chance to learn from the experience. We also get to show it off as a portfolio piece without client conflict; most pharma projects don’t allow portfolio usage.
The app continues to be a highly effective, non-proprietary portfolio piece and very useful in opening doors at pharmaceutical and medical education advertising agencies. Several agencies have since hired us to create about a half million dollars in augmented experiences (mobile and large scale). These sales aids and educational experiences are terrific in trade show booths as well as giveaways at marketing events.
Developing the heartCam was well worth the investment for us and we continue to look for new technologies to learn and share with clients.
Mechanism Digital often receives inquiries about new technologies to set our clients apart from their competitors. It’s exciting to be thought of as a technological thought leader. Currently, we are working on 360° stereoscopic virtual reality videos to be viewed in Google Cardboard.
Tech is moving so fast, it’s fun to imagine what we’ll be inventing next year!
Originally published: 07/08/2015
The Void takes virtual reality to the next level
4D environmental FX to create virtual environments
Utah theme park develops virtual worlds built over physical environments with "4D" environmental FX to create the most immersive experience yet.
Originally published: 05/12/2015