Categories
Misc

Tutorial: Accelerating Deep Learning with Apache Spark and NVIDIA GPUs on AWS

Learn how to create a cluster of GPU machines and use Apache Spark with Deep Java Library (DJL) on Amazon EMR to perform large-scale image classification in Scala.

Categories
Misc

Tutorial: Cross-Compiling Robot Operating System Nodes for NVIDIA DRIVE AGX

In this post, we show you how ROS and DriveWorks can be used to build AV applications, using a ROS package that we have put together.

Categories
Misc

Tutorial: Creating a Real-Time License Plate Detection and Recognition App

In this post, NVIDIA engineers show you how to use production-quality AI models, such as the License Plate Detection (LPD) and License Plate Recognition (LPR) models, in conjunction with the NVIDIA Transfer Learning Toolkit (TLT).

Categories
Misc

Tutorial: Creating Voice-based Virtual Assistants Using NVIDIA Jarvis and Rasa

Step-by-step tutorial to develop a voice-based virtual assistant and learn what it takes to integrate Jarvis ASR and TTS with Rasa NLP and Dialog Management (DM).

Categories
Misc

Tutorial: Developing a Question Answering Application Quickly Using NVIDIA Jarvis

Learn how you can use Jarvis QA and the Wikipedia API action to create a simple QA application.

Categories
Misc

Couldn’t train an NN to solve a 2nd-order ODE

I am trying to solve a 2nd-order ODE,

y'' + 100y = 0, y(0) = 0, y'(0) = 10 on the interval [0, 1],

using a neural network. Here is the code: https://colab.research.google.com/gist/rprtr258/717c07b72f2263ca0dc401c83e9179e5/2nd-order-ode.ipynb#scrollTo=zeub0DBC9pkr

But I have two problems:

  1. I guess TF recompiles (retraces) some function during training, which slows the learning process significantly. Putting the whole training process into a function doesn’t help.
  2. The NN doesn’t fit at all. I guess it might be because of the gradient size on the last layer or something. Anyway, it is difficult to test while problem 1 persists.

Any help with problem 1 and maybe problem 2 will be appreciated.
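
Since the notebook itself isn’t reproduced here, below is a minimal sketch of the kind of setup described, with the layer sizes, optimizer, and step count as placeholder assumptions. The ODE residual plus the two initial conditions form the loss, and the whole training step is wrapped in a single @tf.function with a fixed input_signature so it is traced once rather than retraced on every call (retracing is a common cause of the slowdown described in problem 1):

```python
import tensorflow as tf

# Small network approximating y(x); layer sizes and hyperparameters are placeholders.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="tanh", input_shape=(1,)),
    tf.keras.layers.Dense(64, activation="tanh"),
    tf.keras.layers.Dense(1),
])
optimizer = tf.keras.optimizers.Adam(1e-3)

x = tf.reshape(tf.linspace(0.0, 1.0, 101), (-1, 1))  # collocation points on [0, 1]
x0 = tf.zeros((1, 1))                                 # point where the initial conditions hold


@tf.function(input_signature=[tf.TensorSpec(shape=(None, 1), dtype=tf.float32)])
def train_step(xs):
    """One optimization step; traced only once because the input signature is fixed."""
    with tf.GradientTape() as outer:
        # y'' via two nested tapes over the inputs
        with tf.GradientTape() as t2:
            t2.watch(xs)
            with tf.GradientTape() as t1:
                t1.watch(xs)
                y = model(xs)
            dy = t1.gradient(y, xs)
        d2y = t2.gradient(dy, xs)
        residual = d2y + 100.0 * y                    # ODE: y'' + 100 y = 0

        # initial conditions y(0) = 0 and y'(0) = 10
        with tf.GradientTape() as t0:
            t0.watch(x0)
            y0 = model(x0)
        dy0 = t0.gradient(y0, x0)

        loss = (tf.reduce_mean(tf.square(residual))
                + tf.square(y0[0, 0])
                + tf.square(dy0[0, 0] - 10.0))
    grads = outer.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss


for step in range(5000):
    loss = train_step(x)
```

On problem 2: the exact solution of this problem is y(x) = sin(10x), which oscillates quickly on [0, 1], so a small network typically needs smooth activations such as tanh and enough collocation points before it starts to fit.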

submitted by /u/rprtr258

Categories
Misc

Running TensorFlow for Python on multiple cores?

Hey guys,

I’m currently working on a TensorFlow Python script which I plan to run on a server with multiple cores.

The problem is that if I run the script in separate SSH sessions, it always defaults to the same core, and I need each run to use a different core so I can take advantage of all of the cores available.

I’m using TensorFlow 2.2, so tf.Session is no longer available.

Can anyone please tell me how to achieve this?

Thanks
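
One possible approach, sketched below on the assumption that the server runs Linux: pin each process to its own core with os.sched_setaffinity before importing TensorFlow (equivalently, launch each session with `taskset -c <core> python script.py`), and cap TensorFlow’s thread pools so each process stays on its assigned core. The script name and core argument are placeholders.

```python
import os
import sys

# Pick a core per SSH session, e.g.
#   python script.py 0   # first session
#   python script.py 1   # second session
core = int(sys.argv[1]) if len(sys.argv) > 1 else 0
os.sched_setaffinity(0, {core})  # Linux only; 0 = current process

import tensorflow as tf  # import after pinning the process

# Keep TensorFlow's own thread pools small so each process
# stays on its assigned core instead of spawning threads everywhere.
tf.config.threading.set_intra_op_parallelism_threads(1)
tf.config.threading.set_inter_op_parallelism_threads(1)

# ... rest of the training / inference script ...
```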

submitted by /u/Triptonpt

Categories
Misc

NVIDIA Deep Learning Institute Releases New Accelerated Data Science Teaching Kit for Educators

As data grows in volume, velocity and complexity, the field of data science is booming. There’s an ever-increasing demand for talent and skillsets to help design the best data science solutions. However, expertise that can help drive these breakthroughs requires students to have a foundation in various tools, programming languages, computing frameworks and libraries.

Categories
Misc

Bring AI to Market Fast with Pre-Trained Models and Transfer Learning Toolkit 3.0

Intelligent vision and speech-enabled services have now become mainstream, impacting almost every aspect of our everyday life. AI-enabled video and audio analytics are enhancing applications from consumer products to enterprise services. Smart speakers at home. Smart kiosks or chatbots in retail stores. Interactive robots on factory floors. Intelligent patient monitoring systems at hospitals. And autonomous traffic solutions in smart cities. NVIDIA has been at the forefront of inventing technologies that power these services, helping developers create high-performance products with faster time-to-market. 

Today, NVIDIA released several production-ready, pre-trained models and a developer preview of Transfer Learning Toolkit (TLT) 3.0, along with DeepStream SDK 5.1. The release includes a collection of new pre-trained models—innovative features that support conversational AI applications—delivering a more powerful solution for accelerating the developer’s journey from training to deployment. 

Accelerate Your Vision AI Production 

Creating a model from scratch can be daunting and expensive for developers, startups, and enterprises. NVIDIA TLT is the AI toolkit that abstracts away AI/DL framework complexity and enables you to build production-quality pre-trained models faster, with no coding required.  

With TLT, you can bring your own data to fine-tune the model for a specific use case using one of NVIDIA’s multi-purpose, production-quality models for common AI tasks or use one of the 100+ permutations of neural network architectures like ResNet, VGG, FasterRCNN, RetinaNet, and YOLOv3/v4. All the models are readily available from NGC.
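
TLT itself is driven by spec files and the launcher rather than hand-written training code, so the snippet below is not TLT’s interface. It is only a generic Keras sketch of the underlying transfer-learning idea that TLT automates, with the class count, image size, and datasets as placeholder assumptions: reuse a pre-trained backbone, freeze it, and fine-tune a small classification head on your own data.

```python
import tensorflow as tf

NUM_CLASSES = 3          # placeholder: number of classes in your dataset
IMG_SIZE = (224, 224)    # placeholder: input resolution

# Pre-trained backbone, frozen so only the new head trains at first.
backbone = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False,
    input_shape=IMG_SIZE + (3,), pooling="avg")
backbone.trainable = False

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds and val_ds are assumed to be tf.data.Dataset objects of (image, label) pairs.
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Once the new head converges, the backbone can be unfrozen and trained with a much lower learning rate, which is roughly what fine-tuning a pre-trained model means in practice.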

Key highlights for pre-trained models and TLT 3.0 (developer preview)

  • New vision AI pre-trained models: license plate detection and recognition, heart rate monitoring, gesture recognition, gaze estimation, emotion recognition, face detection, and facial landmarks estimation 
  • Support for conversational AI use cases with pre-trained models for automatic speech recognition (ASR) and natural language processing (NLP) 
  • Choice of training with popular network architectures such as EfficientNet, YoloV4, and UNET
  • Improved PeopleNet model to detect difficult scenarios such as people sitting down and rotated/warped objects
  • TLT launcher that pulls compatible containers to initialize the environment
  • Support for NVIDIA Ampere GPUs with third-generation Tensor Cores for a performance boost 

Get Started 

New Developer Webinar

Join the upcoming webinar “Using NVIDIA Pre-Trained Models and Transfer Learning Toolkit 3.0 to Create Gesture-based Interactions with a Robot” on March 3, 11 a.m. PT. We’ll demonstrate the entire end-to-end developer workflow in a video, from training to deployment, to show how easy it is to build a gesture-recognition application with human-robot interaction. Register now >>

What Our Customers Are Saying

“INEX RoadView, our comprehensive automatic license plate recognition system for toll roads, uses NVIDIA’s end-to-end vision AI pipeline, production ready AI models, TLT, and DeepStream SDK. Our engineering team not only slashed the development time by 60% but they also reduced the camera hardware cost by 40% using Jetson Nano and Xavier NX. This enabled our vendors to deploy RoadView, the only out of the box ALPR solution, quickly and reliably. For us, nothing else came close.”

Dr. Roman Prilutsky, CEO/CTO, INEX

“We are enabling developers and third-party vendors to readily build intelligent AI apps leveraging Optra’s skills marketplace. As a new entrant to the Edge AI market, being able to differentiate our offerings and time to market was crucial. Readily available MaskRCNN from TLT and easy integration into DeepStream saved 25% development effort right out of the box for our R&D team.” 

Chad McQuillen, Senior Technical Staff Member & Solutions Architect for Optra, Lexmark Ventures

“At Quantiphi, we use NVIDIA SDKs to build real-time video analytics workflows for many of our Fortune 500 customers across Retail and Media & Entertainment. Transfer Learning Toolkit provides an efficient way to customize training and model pruning for faster edge inference. DeepStream allows us to build high throughput inference pipelines on the Cloud and easily port them to the Jetson NX devices.”

Siddharth Kotwal, Solution Architecture Lead, Quantiphi

KION Group is working on robust AI-based distribution autonomy solutions across its brands to address operational needs and logistics optimization challenges and to greatly reduce flow exception events. Its innovation, engineering, and digital transformation services are benefiting from optimized NVIDIA pre-trained models while rapidly fine-tuning models on the fly using the Transfer Learning Toolkit and deploying with NVIDIA DeepStream, unlocking multi-stream density on Jetson platforms.

KION Group 

Categories
Misc

NVIDIA Releases Jarvis 1.0 Beta for Building Real-Time Conversational AI Services

Today, NVIDIA released Jarvis 1.0 Beta which includes an end-to-end workflow for building and deploying real-time conversational AI apps, such as transcription, virtual assistants and chatbots. Jarvis is a flexible application framework for multimodal conversational AI services that delivers real-time performance on NVIDIA GPUs.

This release of Jarvis includes new pre-trained models for conversational AI and support for the Transfer Learning Toolkit (TLT), so enterprises can easily adapt apps to their specific use case and domain. These apps are able to understand context and nuance, offering a better experience to users.

With Jarvis, enterprises get state-of-the-art models, ~10x speedup in development time using transfer learning with TLT, and fully optimized and GPU-accelerated pipelines for creating intelligent language-based applications that can run in real time.
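
To give a sense of what consuming these pipelines as services looks like from the application side, here is a rough gRPC client sketch for offline speech recognition. The jarvis_api module, stub, and message names below are assumptions modeled on the client samples shipped with the Jarvis Quick Start, so check those samples for the exact API:

```python
import grpc

# NOTE: the module, stub, and message names below are assumptions based on the
# Jarvis Quick Start client samples; verify them against the shipped proto stubs.
import jarvis_api.jarvis_asr_pb2 as jasr
import jarvis_api.jarvis_asr_pb2_grpc as jasr_srv
import jarvis_api.audio_pb2 as ja

channel = grpc.insecure_channel("localhost:50051")   # assumed default server address
asr_client = jasr_srv.JarvisSpeechRecognitionStub(channel)

config = jasr.RecognitionConfig(
    encoding=ja.AudioEncoding.LINEAR_PCM,
    sample_rate_hertz=16000,
    language_code="en-US",
    max_alternatives=1,
    enable_automatic_punctuation=True,
)

with open("sample.wav", "rb") as f:                  # sample.wav is a placeholder file
    audio_bytes = f.read()

response = asr_client.Recognize(jasr.RecognizeRequest(config=config, audio=audio_bytes))
print(response.results[0].alternatives[0].transcript)
```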

Highlights from this version include:

  • ASR, NLU, and TTS models trained on thousands of hours of speech data.
  • TLT with a zero-coding approach to quickly retrain models on custom data.
  • Fully accelerated deep learning pipelines optimized to run as scalable services.
  • End-to-end workflow and tools to deploy services using one line of code. 

Conversational AI is opening new opportunities in every industry, from finance and healthcare to consumer services. 

Early adopters of Jarvis include InstaDeep, a company creating virtual assistants in the Arabic language. NVIDIA Jarvis played a significant role in improving their application’s performance. Using the NeMo toolkit in Jarvis, they were able to fine-tune an Arabic speech-to-text model to get a Word Error Rate as low as 7.84%.

One of the largest mobile network operators in Russia, MTS, is working with Jarvis on chatbots and virtual assistants for customer support. With Jarvis, they saw remarkable accuracy by fine-tuning the ASR models for the Russian language, and higher throughput with TensorRT optimizations. 

Ribbon is leveraging Jarvis in their real-time communications and call processing platform to do advanced AI speech-to-text. Business and government organizations record tens of millions of calls every day, but it’s nearly impossible to search them to pull out important insights. Through Jarvis, recordings can now be turned into text so that AI tools can quickly search and analyze this data.

In the area of healthcare, Northwestern Medicine is working with Artisight to make hospitals smarter.

“At Northwestern Medicine, we aim to improve patient satisfaction and staff productivity with our suite of healthcare AI solutions,” said Andrew Gostine, MD, MBA, CEO of Artisight. “Conversational AI, powered by NVIDIA Clara Guardian and Jarvis, improves patient and staff safety during COVID-19 by reducing direct physical contact while delivering high-quality care. Jarvis ASR and TTS models make this conversational AI a reality. Patients now no longer need to wait for the clinical staff to become available, they can receive immediate answers from an AI-powered virtual assistant.”

Meanwhile, Intelligent Voice, which has a system that uses speech recognition technology to capture calls, convert them into text, and automatically send transcripts, saw great results with Jarvis.

“At Intelligent Voice, we provide high performance speech recognition solutions, but our customers are always looking for more,” said Nigel Cannings, CTO at Intelligent Voice. “Jarvis takes a multi-modal approach that fuses key elements of Automatic Speech Recognition with entity and intent matching to address new use cases where high-throughput and low latency are required. The Jarvis API is very easy to use, integrate and customize to our customers’ workflows for optimized performance.”

Figure 1: Leading adopters across all verticals.

NVIDIA Jarvis and the Transfer Learning Toolkit are freely available for download to members of the NVIDIA Developer Program today. On the ‘Getting Started’ page, new users will find several resources such as samples, Jupyter notebooks, and tutorial blogs.