Pixar's success is truly a combination of art and science! During a recent visit to the Carnegie Science Center in Pittsburgh with my daughter, we were captivated by the Pixar exhibit. Seeing the advanced technology and creative artistry that go into Pixar's films inspired me. It got me thinking about the parallels with digital transformation in manufacturing, and how a blend of human elements and technology is crucial for success in both fields. #ai #data #digitaltransformation #bigdata #manufacturing #smt #smta
-
Stepping into the Future with Fei-Fei Li's World Labs! 🌍

Imagine turning a single photo into a fully interactive 3D world. Fei-Fei Li's new company, World Labs, is doing just that by introducing spatial intelligence to generative AI. Their system transforms static images into dynamic, explorable 3D environments, all rendered live in your browser.

Why this matters:
📽️ Game-changing creativity: From gaming to filmmaking, this tech opens doors for creators to bring worlds to life faster than ever.
🛠️ Accessible design: Artists and engineers can now build with interactive depth and realism, reducing costs and development time.
🔮 A glimpse of tomorrow: As these systems evolve, the possibilities for simulation, training, and education are endless.

While it's an early preview with some limitations (think small explorable areas and occasional rendering glitches), World Labs' vision shows incredible promise. With $230M in funding and a mission to change how we interact with digital spaces, the future of world models is here.

Read more here (and play with their demos): https://lnkd.in/eAEdthNV

#ArtificialIntelligence #GenerativeAI #SpatialIntelligence #WorldLabs #FeiFeiLi #3DModels #FutureOfTech #Innovation #AIForGood #TechForCreatives
-
Day 3 of Thinkmas 🎄🧐 Hyperreality - Jean Baudrillard

On the third day of Thinkmas, we look at Jean Baudrillard's concept of hyperreality. It suggests that our "reality" is increasingly a copy without an original: think advertising, social media, and virtual experiences that feel more "real" than real life. Hyperreality spotlights a world where representations (images, media, simulations) become more "real" than direct experience.

As generative AI produces ultra-realistic images, voices, and text, these insights remind us how hard authenticity is to establish, and why we should maintain critical awareness amid increasingly convincing fabrications.

Baudrillard's ideas were popularised by The Matrix, where his seminal book "Simulacra and Simulation" makes a cameo (see image). It was also Baudrillard who coined the phrase "the desert of the real", which Morpheus uses to describe the world in that movie.

"We live in a world with more and more information, and less and less meaning"

You can find the book here: https://lnkd.in/e3deHnMf

#Thinkmas #Baudrillard #Hyperreality #MediaStudies
-
Last week, our talented Disguise engineers working on the EMERALD Research Initiative shared major progress. Using advanced AI, they added depth information to scenes captured by a single camera, laying the groundwork for the next generation of virtual production tools. These innovations will enhance our award-winning 2.5D workflows and set the stage for fully 3D virtual assets, ready for use from pre-production through post. Looking ahead to 2025, we can't wait to roll out these capabilities and redefine what's possible in virtual production.

Learn more: https://lnkd.in/dUZvPjqh

#VirtualProduction #AI #Innovation #Disguise
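EMERALD's internals aren't public, but the underlying idea (estimating depth from a single camera image) is easy to experiment with using open models. Below is a purely illustrative sketch, not Disguise's pipeline, using the Hugging Face Transformers depth-estimation pipeline with an open checkpoint; the image path is a placeholder.

```python
# Illustrative only: generic monocular depth estimation with an open model,
# NOT the EMERALD Research Initiative's actual pipeline (which is not public).
from transformers import pipeline
from PIL import Image

# "Intel/dpt-large" is one publicly available depth model; any single-image
# depth estimator would do for this sketch.
depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")

frame = Image.open("single_camera_frame.jpg")  # hypothetical input frame
result = depth_estimator(frame)

# result["depth"] is a PIL image of per-pixel relative depth;
# result["predicted_depth"] is the raw tensor, which a virtual production
# tool could turn into a 2.5D or 3D asset.
result["depth"].save("frame_depth.png")
```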
-
Exploring AI Video Creation: OpenAI's Sora vs. Runway's Gen-3 Alpha 🌟

In our latest video, we dive into the exciting developments in AI video creation, focusing on OpenAI's Sora and Runway's Gen-3 Alpha. Both of these cutting-edge technologies are pushing the boundaries of what's possible, but how do they compare?

A note on methodology: the OpenAI Sora previews were curated, so it's not an entirely fair comparison; I simply took the first attempt at the same text prompt with Runway Gen-3 Alpha. Also, if an extra-long prompt hit the character limit, I cut it short, but I did my best to keep important keywords that were stuck at the tail end of the prompt, like "35mm". If one of those was cut off, I reincorporated it into the Runway prompt somewhere near the end (a small helper for this idea is sketched below).

Key Insights:
1. Improved Realism: Both Gen-3 Alpha and Sora have significantly heightened realism, delivering more believable gait, anatomy, and movement. This makes AI-generated videos look and feel more lifelike than ever before. 🏃‍♂️
2. Handling Physical Motion and Artifacts: While both models sometimes struggle with implausible physical motion, hand rendering, and occasional random morphing artifacts, Gen-3 Alpha has shown remarkable improvements in minimizing these issues. The movements are more fluid, and the characters' interactions with their environment are more convincing. 🤖
3. Access and Availability: One major advantage of Runway's Gen-3 Alpha is its availability via a subscription-based model, making it accessible to a broader audience of creators and developers. OpenAI's Sora, on the other hand, does not yet have a firm release date, suggesting that OpenAI is still refining and advancing the technology before making it widely available. 📅
4. Technological Advancements: Gen-3 Alpha's advancements are noteworthy, with better occlusion handling and temporal consistency. These improvements align closely with Sora's performance, making Gen-3 Alpha a competitive option in the AI video creation space. 🌐

Stay tuned to see how these AI tools transform the future of video creation! Don't forget to like, comment, and subscribe for more updates on AI technology and innovations. 👍

#AIVideo #TechInnovation #Filmmaking #MachineLearning #FutureTech #DigitalArt #VisualEffects #Innovation #Sora #OpenAISora #Gen3 #Gen3Alpha #Runway #Midjourney #AIPhoto #GenerativeAI #GenAI

https://lnkd.in/g7Pdw2D3
Video: OpenAI Sora vs. Runway Gen-3 Alpha - AI Video Model Comparison (youtube.com)
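As referenced above, here is a tiny sketch of the prompt-truncation approach described in the post. The helper name and the 512-character default are assumptions for illustration, not Runway's actual limit.

```python
# Sketch: cut a long Sora prompt to fit a character limit, then re-append key
# terms (e.g. "35mm") that would otherwise be lost off the tail end.
# The function name and the default limit are assumptions, not Runway's API.
def fit_prompt(prompt: str, keywords: list[str], max_chars: int = 512) -> str:
    truncated = prompt[:max_chars]
    # Re-append any important keyword that the cut removed.
    missing = [kw for kw in keywords if kw.lower() not in truncated.lower()]
    if missing:
        suffix = ", " + ", ".join(missing)
        truncated = truncated[: max_chars - len(suffix)] + suffix
    return truncated

example = "A low-angle tracking shot of a vintage car on a desert highway, golden hour, shot on 35mm film"
print(fit_prompt(example, ["35mm"], max_chars=60))
```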
-
🚀 Just completed the DeepLearning.AI course on Prompt Engineering for Vision Models! 🎓 The course gave me valuable insights and hands-on experience with cutting-edge techniques and models such as OWL-ViT, LoRA, DreamBooth, UNet, and SAM. A big thank you to the DeepLearning.AI team for this incredible learning opportunity. Onwards and upwards! 💡 #ArtificialIntelligence #DeepLearning #MachineLearning #AI #ContinuousLearning
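As a flavour of that material (not course code), here is a minimal zero-shot object-detection sketch with OWL-ViT via the Hugging Face Transformers pipeline; the image path and candidate labels are placeholders.

```python
# Minimal OWL-ViT zero-shot object detection via the Transformers pipeline.
# The image path and candidate labels below are placeholders.
from transformers import pipeline
from PIL import Image

detector = pipeline("zero-shot-object-detection", model="google/owlvit-base-patch32")

image = Image.open("factory_floor.jpg")  # hypothetical test image
detections = detector(image, candidate_labels=["person", "forklift", "pallet"])

# Each detection has a label, a confidence score, and a bounding box.
for det in detections:
    print(f'{det["label"]}: {det["score"]:.2f} at {det["box"]}')
```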
-
🚀 Segment Anything Model 2 (SAM 2) is out, and it improves visual segmentation! 🖼️

Meta's SAM 2 is a groundbreaking foundation model for promptable visual segmentation in images and videos. 🎥 With its simple transformer architecture and streaming memory, it enables real-time video processing. ⚡

Key features:
* Extends to video by treating images as single-frame videos 🎞️
* Strong performance across diverse tasks and visual domains 💪
* Promptable visual segmentation 🎯

Check out these impressive demos:
* Tracking objects for video effects 🎬
* Segmenting moving cells in microscope footage for scientific research 🔬
* Handling complex, fast-moving objects 🏎️

#ComputerVision #AI #MachineLearning #VisualSegmentation
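For anyone who wants to try point-prompted segmentation on a still image, here is a minimal sketch using the SAM 2 image predictor. It assumes Meta's `sam2` package and the Hugging Face checkpoint name as released; the image path and click coordinates are placeholders.

```python
# Minimal point-prompted segmentation with SAM 2's image predictor.
# Assumes the `sam2` package (github.com/facebookresearch/sam2) is installed;
# the image path, checkpoint, and click coordinates are placeholders/assumptions.
import numpy as np
from PIL import Image
from sam2.sam2_image_predictor import SAM2ImagePredictor

predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2-hiera-large")

image = np.array(Image.open("cells.jpg").convert("RGB"))  # hypothetical frame
predictor.set_image(image)

# One positive click (label 1) at pixel (x=500, y=300) prompts the model to
# segment whatever object sits under that point.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 300]]),
    point_labels=np.array([1]),
)
print(masks.shape, scores)  # candidate masks (N, H, W) with confidence scores
```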
-
Will AI replace us? What is the future of AI in proposals and proposal graphics? What are 3 truths about AI in our industry? Watch now and uncover what I’ve learned.
🚀 Loose Change Alert! 🚀 Join Tan Wilson and graphics expert Mike Parkinson from 24 Hour Company as we discuss the psychology behind why customers buy and award contracts, the transformative impact of AI on proposal graphics, and the power of clear visual communication. This episode sets the stage for understanding the key elements that make a winning proposal. 🎧 Listen to this teaser now and stay ahead in the GovCon industry: https://lnkd.in/g5ATVvaZ #GovCon #ProposalGraphics #AI #T2C #entellect Alexa Tsui Ron Jones G2Xchange Rachel Jones Meryl Angelicola, CF APMP ProposalTeam ProposalTeam Alumni Sadaf Shukla Abigail Halder
-
Imagine having the power to create breathtakingly realistic images with just your words. FLUX.1, the latest AI image generator from Black Forest Labs, promises to revolutionize the world of digital art by transforming mere text into stunningly lifelike visuals. This groundbreaking technology, developed by the minds behind the renowned Stable Diffusion, showcases an uncanny ability to capture the intricate details of human anatomy, particularly the expressive nuances of hands. Whether you're envisioning a regal queen adorned with cosmic grandeur or a more whimsical scene, FLUX.1 brings your imagination to life with an unprecedented level of detail and realism. With its user-friendly interface and powerful capabilities, this AI tool empowers both professionals and enthusiasts alike to explore new realms of creativity. #AIArt #FLUX1 #CreativeRevolution
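For readers who want to experiment locally, here is a minimal text-to-image sketch using the openly released FLUX.1 [schnell] weights through Hugging Face Diffusers. It assumes a recent diffusers release that includes FluxPipeline and a GPU with enough memory; the prompt is just an example.

```python
# Minimal text-to-image sketch with FLUX.1 [schnell] via Hugging Face Diffusers.
# Assumes a recent diffusers release with FluxPipeline (plus accelerate) and a
# GPU with sufficient VRAM; the prompt below is only an example.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # helps the model fit on consumer GPUs

image = pipe(
    "a regal queen adorned with cosmic grandeur, detailed hands, photorealistic",
    num_inference_steps=4,  # the schnell variant is distilled for very few steps
    guidance_scale=0.0,     # schnell is intended to run without CFG
).images[0]
image.save("flux_queen.png")
```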
-
👓 Snap's Glasses to Combine AR with AI Generations

Snap's glasses are aiming to merge AR with AI-generated content. It looks like a functional demo, but the question is, does it have real value? The push towards integration is clear across all major players.

#SnapGlasses #AR #AIIntegration #TechTrends #AugmentedReality #ArtificialIntelligence #FutureTech
-
Dream Machine: AI-Powered Video Creation for All! 🎥✨

Luma Labs' Dream Machine is a game-changer in AI video generation! 🚀
* Open to the public 🌍
* High-quality, realistic videos from text and images 📝🖼️
* 120 frames in 120 seconds ⚡
* Action-packed shots with smooth motion 🏃‍♂️
* Consistent characters and physics 👥🌍
* Cinematic camera moves 🎥

https://lumalabs.ai/

Unleash your creativity and bring your ideas to life like never before! 💡 Try Dream Machine now and share your experiences in the comments. 💬

#DreamMachine #LumaLab #VideoAI #AICreativity