Categories
Misc

The Force is Strong with NVIDIA Jetson

NVIDIA Jetson Nano-powered training drone. Courtesy of Hacksmith Industries

This post features the winners of the NVIDIA-sponsored contest with Make:, in which makers submitted their best robotics projects with a galactic theme.

Earlier this year, NVIDIA sponsored a contest with Make: Magazine, asking makers to submit their best AI-enabled droid projects with a galactic theme. Below are the two droid contest winners. 

A life-sized 3D printed replica of Star Wars’ R2-D2, a service robot, stands tall and fully assembled with a white body, a silver domed head, and dark blue accents.
Figure 1. R2-D2 robot. Courtesy of John Ferguson.

Autonomous 3D Printed R2-D2

During the Covid lockdown, John Ferguson looked for a fun project to build with his 11-year-old son. Using a 3D printer, John took on the massive task of creating every inch of his robot. About 40 kg of filament and 10 months of printing later, he had himself the body of an R2 unit. This hands-on project required sanding, prepping each piece, filling, painting, and tuning for a truly movie-grade finish.

A few components of the R2-D2 build: Sabertooth motor controllers, Sony cameras, two scooter motors, an Arduino, an NVIDIA Jetson Nano, and a Muse EEG brainwave reader that drives an active periscope mechanism, all controlled with an Xbox 360 wireless controller. The project is a way for the Ferguson family to learn about AI, with the Jetson Nano powering R2’s vision-based object recognition and speech recognition.

“This is the first time we’ve done a project using object recognition and it’s a thrill. It really feels like the future is here! To teach children this capability and show them the interactivity—you really get stunned silence as a reaction,” said Ferguson.

Plain, uncolored 3D printed components, including gears and joints, are laid out on a wooden table next to a banana for size comparison. Most of the parts are the same size as or smaller than the banana.
Figure 2. 3D printed droid parts, banana for scale. Courtesy of John Ferguson.

Asked why he chose to build the NVIDIA Jetson Nano into his project, John said that R2’s plastic body is already heavy, so a lightweight compute module is preferable. The Jetson Nano is also a premium option for AI object recognition, integrates easily with Python apps, and is a good system for a young person to learn with.

John’s son inserts components into R2-D2’s incomplete frame to finish the build of its head.
Figure 3. John’s son constructing droid. Courtesy of John Ferguson.

Their goal is to attend in-person events with R2 strolling autonomously at their side using ROS 2, identifying other Star Wars characters accurately, and responding vocally just like the real thing. John and his son are building the body from scratch, training the recognition model on their own annotated image library, and optimizing the models.

“I’m not a developer. We’ve learned everything from videos and tutorials. We had the time and the passion, and that’s got us to where we are through experimenting and persistence,” Ferguson said.

At the heart of the R2-D2 project, John hopes to showcase the robot to local schools, outline his journey so that it’s replicable, and talk about the personal growth that comes with building something from scratch and having fun with robotics.

“I want to encourage young people to enjoy developing technology,” said Ferguson.

Follow along with “build log” updates on their Facebook page.

AI Made Accessible with RoboJango

Figure 4. RoboJango holding an NVIDIA Jetson Nano Developer Kit. Courtesy of Jim Nason.

Next up, we have Jim Nason’s impressive Mandalorian-inspired droid named RoboJango.

This droid is packed with features that any Star Wars fanatic would be psyched to see. To name a few, it has HD vision for eyes, acoustic sensors, dual lidar for autonomous navigation, heat sensors, off-road ATV mobility, and a powerful winch for getting out of sticky situations. Like the R2-D2 robot, RoboJango incorporates 3D printed parts mounted on a wood frame and steel core.

RoboJango is surprisingly personable. It uses human-like movements and conversational AI skills, giving people in the room casual greetings, flexing its “muscles”, and cracking jokes. It recognizes Jim’s family members and their pets, with its object recognition handled by a deep neural network (DNN). All of this runs on several Arduinos, a battery matrix, a customized software framework, and an NVIDIA Jetson Nano as its brain.

The coolest thing about RoboJango is its maker, Jim Nason. A programmer with 30 years of professional experience, he started building the robot after his son asked for a 3D printer and wanted to use it to build an android.

RoboJango’s chest plate has been removed. In playful fashion, the inventor, Jim Nason, has placed three Jetson Nano Developer kit packages into the cavity of the human-like robot.
Figure 5. Space cowboy, RoboJango posing with its NVIDIA Jetsons. Courtesy of Jim Nason.

Three years ago, Jim started by building just a finger, then an arm, eventually working up to an AI-enabled robot. RoboJango also has functional anthropomorphic eyes and a wire-based circulatory system modeled on a virtual representation of a human.

Talking with Jim, you get the sense that he really understands the mantra of being a maker, which is to dream, learn, and innovate: 

“I’m teaching him how to play the ukulele and drums. Just need to work on the movements,” Nason said. 

Nason standing on a platform outside.
Figure 6. Jim Nason, winner of Make: contest.

He also built a best friend for RoboJango, its very own robot dog. 

Now, Jim wants to give back to the community and teach kids all about STEAM with his wacky and whimsical robotics projects. Since March of this year, Jim has taught over 1,500 virtual students across the United States and was awarded a 2021 Impact Award for his outstanding contributions to the classroom. On summer weekends, Jim hosted community builds in Long Beach where anyone could walk in and learn about robotics. 

“RoboJango was created to drive funding for Long Island robotics apprentices. We want to teach all communities and allow kids to have a ball,” said Nason.

Learn more about Jim’s robotics course here, and follow his adventures on Instagram. 


Thank you to our friends at Make: for hosting this contest. 

To see more NVIDIA Jetson Nano projects, visit our Jetson community project page for inspiration. 

Categories
Misc

If I Had a Hammer: Purdue’s Anvil Supercomputer Will See Use All Over the Land

Carol Song is opening a door for researchers to advance science on Anvil, Purdue University’s new AI-ready supercomputer, an opportunity she couldn’t have imagined as a teenager in China. “I grew up in a tumultuous time when, unless you had unusual circumstances, the only option for high school grads was to work alongside farmers or Read article >

The post If I Had a Hammer: Purdue’s Anvil Supercomputer Will See Use All Over the Land appeared first on The Official NVIDIA Blog.

Categories
Misc

Federated Learning With FLARE: NVIDIA Brings Collaborative AI to Healthcare and Beyond

NVIDIA is making it easier than ever for researchers to harness federated learning by open-sourcing NVIDIA FLARE, a software development kit that helps distributed parties collaborate to develop more generalizable AI models. Federated learning is a privacy-preserving technique that’s particularly beneficial in cases where data is sparse, confidential or lacks diversity. But it’s also useful Read article >

The post Federated Learning With FLARE: NVIDIA Brings Collaborative AI to Healthcare and Beyond appeared first on The Official NVIDIA Blog.

Categories
Misc

NVIDIA AI Enterprise Helps Researchers, Hospitals Targeting Cancer Hit the Mark

Whether facilitating cancer screenings, cutting down on false positives, or improving tumor identification and treatment planning, AI is a powerful agent for healthcare innovation and acceleration. Yet, despite its promise, integrating AI into actual solutions can challenge many IT organizations. The Netherlands Cancer Institute (NKI), one of the world’s top-rated cancer research and treatment centers, Read article >

The post NVIDIA AI Enterprise Helps Researchers, Hospitals Targeting Cancer Hit the Mark appeared first on The Official NVIDIA Blog.

Categories
Misc

MONAI Leaps Forward with AutoML-Powered Model Development and Cloud-Native Deployments

Graphic showing logos of MONAI Application Packages + HELM

Project MONAI continues to expand its end-to-end workflow with new releases and a new subproject called MONAI Deploy Inference Service.

Project MONAI is releasing three updates to existing frameworks: MONAI v0.8, MONAI Label v0.3, and MONAI Deploy App SDK v0.2. It’s also expanding its MONAI Deploy subsystem with the MONAI Deploy Inference Service (MIS), a server that runs MONAI Application Packages (MAPs) in a Kubernetes cluster as cloud-native microservices.

MIS expands the end-to-end capabilities of MONAI by integrating with a container orchestration system like Kubernetes. Using the Kubernetes framework, developers can quickly start testing their models and move execution from local development to staging environments.
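As an illustrative sketch of what running a MAP as a cloud-native microservice involves, the snippet below assembles the kind of Kubernetes Deployment manifest such a service could generate for a MAP container. The MAP name, container image, and resource figures here are hypothetical assumptions for illustration, not actual MIS output.

```python
import json

def map_deployment_manifest(name, image, cpu="1", memory="4Gi"):
    """Build a Kubernetes Deployment manifest (as a dict) for a MAP container."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            "replicas": 1,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "resources": {"requests": {"cpu": cpu, "memory": memory}},
                    }],
                },
            },
        },
    }

# Hypothetical MAP name and image, for illustration only.
manifest = map_deployment_manifest("spleen-seg-map", "example.io/maps/spleen-seg:0.1")
print(json.dumps(manifest["metadata"], sort_keys=True))
```

A real orchestration service would apply a manifest like this to the cluster through the Kubernetes API rather than printing it.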

MONAI Core v0.8

MONAI Core v0.8 focuses on expanding its learning capabilities by adding both self-supervised and multi-instance learning support.

Also included is a new state-of-the-art differentiable search framework called DiNTS that helps accelerate Neural Architecture Search (NAS) for large-scale 3D image sets like those found in medical imaging.

Highlights include:

  • Multi-instance learning with examples for the MSD dataset.
  • Visualization of transforms and notebook with approaches for 3D image transform augmentation.
  • Self-supervised learning tutorials with a pretraining pipeline leveraging vision transformers, highlighting training with unlabeled data and adaptation to downstream tasks.
  • DiNTS AutoML with examples using MSD tasks.

Get started with the new features using the included Jupyter notebooks.

MONAI Label v0.3

MONAI Label v0.3 focuses on including multilabel segmentation support with DynUNet and UNETR networks as the base architecture options. It also focuses on enhanced performance with multi-GPU training support to improve scalability and usability improvements that make active learning easier to use.

Highlights include:

  • Multi-Label Segmentation Support
  • Multi-GPU Training
  • Active Learning UX Changes

MONAI Deploy 

MONAI Deploy App SDK v0.2

MONAI Deploy App SDK v0.2 continues to expand its base operators, including support for additional DICOM operations.

Highlights include:

  • Operator for DICOM Series Selection.
  • Operator for exporting DICOM Structured Reports SOP for classification results.

MONAI Deploy Inference Service v0.1

MONAI Deploy Inference Service v0.1 is the first component of the MONAI Deploy Application Server, continuing to expand MONAI’s end-to-end workflow. It can deploy MONAI Application Packages (MAPs) created with the MONAI Deploy App SDK into a Kubernetes cluster.

Highlights include:

  • Register a MAP in the Helm Charts of MIS.
  • Upload inputs through a REST API request and make them available to the MAP container.
  • Provision resources for the MAP container.
  • Provide outputs of the MAP container to the client who made the request.
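To make that request flow concrete, here is a minimal client-side sketch that constructs (but does not send) an HTTP request to an MIS-style endpoint. The route, payload fields, and names are illustrative assumptions, not the documented MIS API; see the MONAI Deploy tutorials for the real interface.

```python
import json
import urllib.request

def build_inference_request(base_url, map_name, input_path):
    """Construct a POST request asking a hypothetical MIS endpoint to run a MAP."""
    payload = json.dumps({"map": map_name, "input": input_path}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/inference",  # assumed route, for illustration only
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# The request is only constructed here; urllib.request.urlopen(req) would send it.
req = build_inference_request("http://localhost:8000", "spleen-seg", "/data/ct.nii.gz")
```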

Check out the new MONAI Deploy tutorials that walk you through creating a MAP using App SDK, deploying the MIS Service, and pushing your MAP to MIS to be run as a cloud-native microservice.

You can find more in-depth information about each release under their respective projects in the Project MONAI GitHub.

Categories
Misc

Programming Distributed Multi-GPU Tensor Operations with cuTENSOR v1.4

The NVIDIA cuTENSOR version 1.4 library supports up to 64-dimensional tensors and distributed multi-GPU tensor operations, and improves its tensor contraction performance model.

Today, NVIDIA is announcing the availability of cuTENSOR, version 1.4, which supports up to 64-dimensional tensors, distributed multi-GPU tensor operations, and helps improve tensor contraction performance models. This software can be downloaded now free of charge.

Download the cuTENSOR software.

What’s New?

  • Supports up to 64-dimensional tensors.
  • Supports distributed, multi-GPU tensor operations.
  • Improved tensor contraction performance model (i.e., algo CUTENSOR_ALGO_DEFAULT).
  • Improved performance for tensor contractions that have an overall large contracted dimension (a parallel reduction was added).
  • Improved performance for tensor contractions that have a tiny contracted dimension.
  • Improved performance for outer-product-like tensor contractions (e.g., C[a,b,c,d] = A[b,d] * B[a,c]).
  • Additional bug fixes.
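As a minimal pure-Python sketch of the outer-product-like case named above (not the cuTENSOR API), the contraction C[a,b,c,d] = A[b,d] * B[a,c] has no summed index, so every output element is a single product:

```python
def outer_product_contraction(A, B):
    """C[a][b][c][d] = A[b][d] * B[a][c] for nested-list tensors."""
    nb, nd = len(A), len(A[0])
    na, nc = len(B), len(B[0])
    return [[[[A[b][d] * B[a][c] for d in range(nd)]
              for c in range(nc)]
             for b in range(nb)]
            for a in range(na)]

A = [[1.0, 2.0], [3.0, 4.0]]   # shape (b=2, d=2)
B = [[5.0, 6.0], [7.0, 8.0]]   # shape (a=2, c=2)
C = outer_product_contraction(A, B)
# e.g. C[0][1][1][0] = A[1][0] * B[0][1] = 3 * 6 = 18
```

Because nothing is summed, cuTENSOR can treat such contractions more like a transpose-and-multiply than a reduction, which is why this case gets its own performance path.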

For more information, see the cuTENSOR Release Notes.

About cuTENSOR

cuTENSOR is a high-performance CUDA library for tensor primitives; its key features include:

  • Extensive mixed-precision support:
    • FP64 inputs with FP32 compute.
    • FP32 inputs with FP16, BF16, or TF32 compute.
    • Complex-times-real operations.
    • Conjugate (without transpose) support.
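As a pure-Python illustration (not the cuTENSOR API, with the precisions chosen for convenience) of what a mixed-precision mode like "FP32 inputs with FP16 compute" means, the sketch below rounds each multiply-accumulate step of a dot product to half precision:

```python
import struct

def to_fp16(x):
    """Round a float to the nearest IEEE 754 half-precision value."""
    return struct.unpack("e", struct.pack("e", x))[0]

def dot_fp16_compute(xs, ys):
    """Dot product where every multiply and add is rounded to FP16."""
    acc = 0.0
    for x, y in zip(xs, ys):
        acc = to_fp16(acc + to_fp16(x * y))
    return acc

# 0.1 is not exactly representable in FP16, so computing in half
# precision introduces a small error relative to full precision.
exact = sum(0.1 * 1.0 for _ in range(10))
approx = dot_fp16_compute([0.1] * 10, [1.0] * 10)
```

On real hardware, computing in a lower precision like this trades a small accuracy loss for substantially higher throughput.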


Categories
Misc

Has anyone used the Tensorflow Lite Model Maker to make an object detection model for a Raspberry Pi? I am trying to make a model and in DESPERATE need of some help.

submitted by /u/Matthewdlr4
[visit reddit] [comments]

Categories
Misc

Tensorflow – Help Protect the Great Barrier Reef

Hi, everyone, hope you are doing well. I am new to Machine Learning and Tensorflow. I was wondering if anyone wants to team up or include me in your team. I would be very grateful. I want to work on a real-life project and this seems to be the best. Thank You.

submitted by /u/boringly_boring
[visit reddit] [comments]

Categories
Misc

Help with Tensorflow Lite

Is anyone here able to help me out make a tensorflow lite object detection model I can run on my pi? I have all of the training data collected and labeled just need help making the model.

I have tried a few things including the Tensorflow Lite Model Maker as well as doing it from scratch locally. Just need help making my model.

submitted by /u/Matthewdlr4
[visit reddit] [comments]

Categories
Misc

Noob Here! Can you answer something for me?

Afternoon!

I would like to create an app around community based image feedback. Is it possible to create a model around what the community rates your existing images & use it to tentatively give a new image a score before anyone votes on it? Can I also incorporate other factors in the image, such as distance between objects or color of items to further refine the model later on?

submitted by /u/programmrz
[visit reddit] [comments]