hi,
I'm trying to modify the MNIST example code so that it recognizes digits seen by the Crazyflie camera instead of the saved images (my final goal is to swap in an object detection model).
So far I've been making changes in the "model.c" file based on the face detection example, which uses the camera, but all my attempts have failed.
Is there a recommended way to do this, or any instructions for it?
And for the future goal: would replacing the h5 file with my own network's h5 weights file be a good way to run my network on the Crazyflie?
thanks!
enabling camera input in the MNIST example
Re: enabling camera input in the MNIST example
Hi raphaelz!
Could you please provide some more details on what exactly you are doing and what the failure is?
Thanks
Jonas
Re: enabling camera input in the MNIST example
As Jonas said, maybe you can provide some details on how you tried to implement it, or even better, share some code? I think the face detection example is a good reference for what you are trying to achieve.
At the very least it is a good starting point if you are working in TensorFlow. It might very well work out of the box, but that depends on the version of TensorFlow you are using and on which layers your CNN architecture is made up of. The problem that can occur is that the optimizers convert layers into operations not supported by GreenWaves' nntool. If you run into errors loading the model into nntool, I would suggest inspecting the generated TensorFlow Lite (.tflite) model in a tool like Netron (netron.app) and verifying that the converted architecture a) makes sense and b) contains only supported operations. You can review the operations supported by nntool by running "help tflite" inside nntool.
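For the conversion step itself, a minimal sketch of exporting a Keras model to .tflite (so you can open it in Netron and feed it to nntool) could look like this. The small CNN here is just a stand-in for your own network; the layer choices and file names are assumptions, not part of the example project:

```python
import tensorflow as tf

# Stand-in MNIST-style CNN; in practice you would instead load your
# trained weights with tf.keras.models.load_model("your_model.h5").
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Convert to TensorFlow Lite; this is where unsupported ops can appear,
# so inspect the result in Netron before handing it to nntool.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

After this, opening model.tflite in Netron shows exactly which operations the converter produced, which you can then compare against nntool's "help tflite" list.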