Categories
Misc

Tensorflow.js Graph Object Detection

I'm currently building a web app that uses TensorFlow to scan graph data and turn the data into a visualisation. At the minute the app can only detect a small number of objects based on the coco-ssd pretrained model (person, phone, bottle), and I'm struggling with: 1) finding other TensorFlow models that I can implement to improve what can be detected; 2) finding TensorFlow models that can scan for objects and data within a graph; and 3) adding another model into my code without breaking what already works. I'm very new to TensorFlow and machine learning, but the Stack Overflow question linked below contains the code for the web app that requires the TensorFlow model.

Code snippets are in the Stack Overflow question:

https://stackoverflow.com/questions/66015902/tensorflow-js-graph-object-detection

submitted by /u/Fawcett_C
[visit reddit] [comments]

Categories
Misc

Installing TensorFlow GPU on Windows 10 with compatible CUDA and cuDNN versions can be a cumbersome task. However, there is a little-known fact: it can be done with just two commands if we are using Anaconda! And I hope it works equally well for Linux too.
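The post doesn't spell the commands out, but the commonly cited Anaconda approach looks like the sketch below (the environment name `tf_gpu` is a placeholder; exact package versions depend on your conda channel):

```shell
# Create an environment; conda resolves a compatible CUDA toolkit
# and cuDNN as dependencies of the tensorflow-gpu package.
conda create --name tf_gpu tensorflow-gpu

# Switch into the new environment before running Python.
conda activate tf_gpu
```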

submitted by /u/TheCodingBug
[visit reddit] [comments]
Categories
Misc

Some help with dataset from images

Hello guys, I'm a bit new to TensorFlow. I'm trying to make a dataset from ONE folder, but the only thing I could manage is a dataset with separate folders using flow_from_directory, which builds a dataset in which each class is a folder. I want to make it from just one folder. Could you please tell me a way I can do that?

submitted by /u/engdiazmu
[visit reddit] [comments]

Categories
Misc

Bigger dataset resulting in a loss of NaN without exceeding RAM limits

I'm currently trying to build a model that can authenticate a person based on their movement data (acceleration etc.)

The dataset is built by me and stored in a JSON file for training in google colab. Sample Notebook

Now, older versions of the dataset with fewer entries worked out fine. But the new version I have has more entries, and suddenly I only get a loss of NaN and an accuracy of 0.5, no matter what I do.

RAM seems to be an obvious suspect, but the RAM usage tracker in Colab shows normal levels (2-4 GB of the available 13). Also, I mocked up dummy datasets of the same or even bigger sizes and they worked out fine.
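Since the dummy data trains fine, a quick audit of the real features is worth doing before anything else. A sketch, assuming the JSON loads into a NumPy array, that catches the two most common culprits behind a NaN loss: invalid values and wildly unscaled features:

```python
import numpy as np

def audit_features(X):
    """Report red flags that commonly produce a NaN loss."""
    X = np.asarray(X, dtype=np.float64)
    return {
        "nan_count": int(np.isnan(X).sum()),  # NaNs poison every gradient
        "inf_count": int(np.isinf(X).sum()),  # infinities do the same
        "min": float(np.nanmin(X)),           # a huge min/max spread suggests
        "max": float(np.nanmax(X)),           # missing normalization
    }
```

If the counts are non-zero, or min and max are orders of magnitude apart, cleaning or standardizing the new entries is a cheaper first step than switching storage formats.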

Do you guys have any idea what is going on here? My only idea going forward is to move over to TFRecords instead of the JSON file.

submitted by /u/Cha-Dao_Tech
[visit reddit] [comments]

Categories
Misc

Couple Questions about TF Serving

I've been reading about TF Serving quite a bit, trying to decide if it makes sense to use it for some applications that I'm working on. As I've been studying up on it, I've run into a few things that I can't seem to answer myself, so I thought I would turn to you beautiful people to see if I could find some answers that I haven't been able to figure out so far.

1) Building the Docker image in the first place. I read through the documentation at https://www.tensorflow.org/tfx/serving/docker and followed the directions to get my model into a Docker image. However, due to the constraints of what I'm working on, I need to be able to build the container from a Dockerfile myself. I found the Dockerfile for TF Serving on GitHub here: https://github.com/tensorflow/serving/blob/master/tensorflow_serving/tools/docker/Dockerfile.devel But when I build that image, it's something like 20 times the size of the ~300 MB one that I get when following the instructions in the docs. I'm looking for a way to have a Dockerfile that I can build into the 300 MB image... so that's one question.

2) My model currently expects a multidimensional Tensor as input. With TF Serving using JSON (a requirement instead of gRPC on this project... comes from on high and I can't do anything about it), it looks like my options are basically to use something Base64 encoded. Is there a way to circumvent this so that I can send a multidimensional Tensor to my model, or do I have to rebuild my model so that it can take in a Base64 image? Ideally... I would like to be able to send the file path to the TF Serving Docker container and have it pick the image up and go from there, but it doesn't seem like that's an option. So I suppose the question is... is Base64 the only way to get an image to the model using JSON?
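For what it's worth, TF Serving's REST API accepts multidimensional tensors directly as nested JSON arrays under "instances"; the "b64" encoding is only needed for binary string tensors such as raw image bytes. A sketch of building such a payload (the model name `my_model` and port 8501 in the comment are the usual defaults, not taken from the post):

```python
import json
import numpy as np

def predict_body(batch):
    # Nested Python lists serialize to the nested JSON arrays that
    # TF Serving's REST predict API expects under "instances".
    return json.dumps({"instances": np.asarray(batch).tolist()})

# POST this body to e.g.
#   http://localhost:8501/v1/models/my_model:predict
body = predict_body(np.zeros((1, 4, 4, 3)))
```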

Thanks for any answers... I've been banging my head against this on and off for the last month and would love any input that you guys can give me!

submitted by /u/TypeAskee
[visit reddit] [comments]