Hi,
How is the .y file made?
My apologies, I left out a step because I needed to confirm it. TIDL converts to the BGR (rather than RGB) image format when using the model, but this conversion is not done automatically on the sample image or raw input images. To fix this, you'll need to convert the image to BGR format before converting it to raw. Instead of the single command I used before, please use the two commands below.
convert $input_filename -separate +channel -swap 0,2 -combine -colorspace sRGB ./sample_bgr.png
convert ./sample_bgr.png -interlace plane BGR:sample_img.raw
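As a quick sanity check (my own addition, not part of the TI flow): a planar raw file produced this way should be exactly width x height x 3 bytes. The 224x224 size below is an assumption taken from a typical classification input; substitute your model's input dimensions.

```shell
# Sanity check: a planar BGR raw file is width * height * channels bytes.
# 224x224 is an assumed input size; adjust to your network's data layer.
WIDTH=224; HEIGHT=224; CHANNELS=3
EXPECTED=$((WIDTH * HEIGHT * CHANNELS))
echo "expected size of sample_img.raw: $EXPECTED bytes"
# Compare with the actual size, e.g.:
#   stat -c%s sample_img.raw
```

If the sizes disagree, the image was probably not resized to the network's input dimensions before conversion.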
Hi,
First, thank you. After running the two commands above, I got the .raw file.
But when I run the following:
tidl_model_import.out /home/lab/Documents/sdk5.03/linux-devkit/sysroots/x86_64-arago-linux/usr/share/ti/tidl/utils/test/testvecs/config/import/caffe/tidl_import_alex66.txt
the result is:
Name of the Network : CaffeNet
Num Inputs : 0
Input layer(s) not Available.. Assuming below one Input Layer !!Unsuported Layer Type : Input !!!! assuming it as pass through layer
[libprotobuf FATAL /oe/bld/build-CORTEX_1/arago-tmp-external-linaro-toolchain/work/x86_64-nativesdk-arago-linux/nativesdk-tidl-import/01.01.00.00-r0/recipe-sysroot-native/usr/include/google/protobuf/repeated_field.h:1478] CHECK failed: (index) < (current_size_):
terminate called after throwing an instance of 'google::protobuf::FatalException'
what(): CHECK failed: (index) < (current_size_):
Aborted (core dumped)
The last line says "CHECK failed: (index) < (current_size_):". I don't know the reason; what should I do?
Hi,
Are you trying to do an implementation of AlexNet? This is not supported by us because TIDL does not have a 'Normalize' layer.
Yes, indeed, I am using the AlexNet model. Here is my situation: I have already trained two models for an image classification task. One is an AlexNet model, and its accuracy is almost 99.9%. The other is ShuffleNet; although it trains about 5 times faster than AlexNet, its accuracy is only 90%.
This page, http://software-dl.ti.com/processor-sdk-linux/esd/docs/latest/linux/Foundational_Components_TIDL.html#verified-networks-topologies , says the Caffe-Jacinto11 model is supported, but its accuracy may only be about 80% (I didn't run this model and just saw it on a website like GitHub; maybe it is higher now).
My purpose is to use a model that can run on TIDL on the AM5708 to infer the result, not to use the TI device translator tool so the model fits TIDL (like Caffe-Jacinto11); that would be very good, but the accuracy shouldn't be too low. What should I do? Can you give me some advice? Thanks a lot.
[libprotobuf ERROR google/protobuf/text_format.cc:288] Error parsing text-format caffe.NetParameter: 356:25: Message type "caffe.LayerParameter" has no field named "shuffle_channel_param".
ERROR: Reading text proto file
Name of the Network : ShuffleNet V2
Num Inputs : 1
Unsuported Layer Type : ConvolutionDepthwise !!!! assuming it as pass through layer
Unsuported Layer Type : ConvolutionDepthwise !!!! assuming it as pass through layer
Unsuported Layer Type : ShuffleChannel !!!! assuming it as pass through layer
Num of Layer Detected : 14
0, TIDL_DataLayer , data 0, -1 , 1 , x , x , x , x , x , x , x , x , 0 , 0 , 0 , 0 , 0 , 1 , 3 , 224 , 224 , 0 ,
1, TIDL_ConvolutionLayer , conv1 1, 1 , 1 , 0 , x , x , x , x , x , x , x , 1 , 1 , 3 , 224 , 224 , 1 , 24 , 112 , 112 , 8128512 ,
2, TIDL_PoolingLayer , pool1 1, 1 , 1 , 1 , x , x , x , x , x , x , x , 2 , 1 , 24 , 112 , 112 , 1 , 24 , 56 , 56 , 677376 ,
3, TIDL_UnSuportedLayer , resx1_match_DWconv 1, 1 , 1 , 2 , x , x , x , x , x , x , x , 3 , 1 , 24 , 56 , 56 , 1 , 24 , 56 , 56 , 0 ,
4, TIDL_BatchNormLayer , resx1_match_DWconv_bn 1, 1 , 1 , 3 , x , x , x , x , x , x , x , 3 , 1 , 24 , 56 , 56 , 1 , 24 , 56 , 56 , 75264 ,
5, TIDL_BatchNormLayer , resx1_match_DWconv_scale 1, 1 , 1 , 3 , x , x , x , x , x , x , x , 4 , 1 , 24 , 56 , 56 , 1 , 24 , 56 , 56 , 75264 ,
6, TIDL_ConvolutionLayer , resx1_match_conv 1, 1 , 1 , 4 , x , x , x , x , x , x , x , 5 , 1 , 24 , 56 , 56 , 1 , 122 , 56 , 56 , 9182208 ,
7, TIDL_ConvolutionLayer , resx1_conv1 1, 1 , 1 , 2 , x , x , x , x , x , x , x , 6 , 1 , 24 , 56 , 56 , 1 , 122 , 56 , 56 , 9182208 ,
8, TIDL_UnSuportedLayer , resx1_conv2 1, 1 , 1 , 6 , x , x , x , x , x , x , x , 7 , 1 , 122 , 56 , 56 , 1 , 122 , 56 , 56 , 0 ,
9, TIDL_BatchNormLayer , resx1_conv2_bn 1, 1 , 1 , 7 , x , x , x , x , x , x , x , 7 , 1 , 122 , 56 , 56 , 1 , 122 , 56 , 56 , 382592 ,
10, TIDL_BatchNormLayer , resx1_conv2_scale 1, 1 , 1 , 7 , x , x , x , x , x , x , x , 8 , 1 , 122 , 56 , 56 , 1 , 122 , 56 , 56 , 382592 ,
11, TIDL_ConvolutionLayer , resx1_conv3 1, 1 , 1 , 8 , x , x , x , x , x , x , x , 9 , 1 , 122 , 56 , 56 , 1 , 122 , 56 , 56 , 46676224 ,
12, TIDL_ConcatLayer , resx1_concat 1, 2 , 1 , 5 , 9 , x , x , x , x , x , x , 10 , 1 , 122 , 56 , 56 , 1 , 244 , 56 , 56 , 1 ,
13, TIDL_UnSuportedLayer , shuffle1 1, 1 , 1 , 10 , x , x , x , x , x , x , x , 11 , 1 , 244 , 56 , 56 , 1 , 244 , 56 , 56 , 0 ,
Total Giga Macs : 0.0748
Processing config file ./tempDir/qunat_stats_config.txt !
0, TIDL_DataLayer , 0, -1 , 1 , x , x , x , x , x , x , x , x , 0 , 0 , 0 , 0 , 0 , 1 , 3 , 224 , 224 ,
1, TIDL_ConvolutionLayer , 1, 1 , 1 , 0 , x , x , x , x , x , x , x , 1 , 1 , 3 , 224 , 224 , 1 , 24 , 112 , 112 ,
2, TIDL_PoolingLayer , 1, 1 , 1 , 1 , x , x , x , x , x , x , x , 2 , 1 , 24 , 112 , 112 , 1 , 24 , 56 , 56 ,
3, TIDL_ReshapeLayer , 1, 1 , 1 , 2 , x , x , x , x , x , x , x , 3 , 1 , 24 , 56 , 56 , 1 , 24 , 56 , 56 ,
4, TIDL_BatchNormLayer , 1, 1 , 1 , 3 , x , x , x , x , x , x , x , 3 , 1 , 24 , 56 , 56 , 1 , 24 , 56 , 56 ,
5, TIDL_BatchNormLayer , 1, 1 , 1 , 3 , x , x , x , x , x , x , x , 4 , 1 , 24 , 56 , 56 , 1 , 24 , 56 , 56 ,
6, TIDL_ConvolutionLayer , 1, 1 , 1 , 4 , x , x , x , x , x , x , x , 5 , 1 , 24 , 56 , 56 , 1 , 122 , 56 , 56 ,
7, TIDL_ConvolutionLayer , 1, 1 , 1 , 2 , x , x , x , x , x , x , x , 6 , 1 , 24 , 56 , 56 , 1 , 122 , 56 , 56 ,
8, TIDL_ReshapeLayer , 1, 1 , 1 , 6 , x , x , x , x , x , x , x , 7 , 1 , 122 , 56 , 56 , 1 , 122 , 56 , 56 ,
9, TIDL_BatchNormLayer , 1, 1 , 1 , 7 , x , x , x , x , x , x , x , 7 , 1 , 122 , 56 , 56 , 1 , 122 , 56 , 56 ,
10, TIDL_BatchNormLayer , 1, 1 , 1 , 7 , x , x , x , x , x , x , x , 8 , 1 , 122 , 56 , 56 , 1 , 122 , 56 , 56 ,
11, TIDL_ConvolutionLayer , 1, 1 , 1 , 8 , x , x , x , x , x , x , x , 9 , 1 , 122 , 56 , 56 , 1 , 122 , 56 , 56 ,
12, TIDL_ConcatLayer , 1, 2 , 1 , 5 , 9 , x , x , x , x , x , x , 10 , 1 , 122 , 56 , 56 , 1 , 244 , 56 , 56 ,
13, TIDL_ReshapeLayer , 1, 1 , 1 , 10 , x , x , x , x , x , x , x , 11 , 1 , 244 , 56 , 56 , 1 , 244 , 56 , 56 ,
14, TIDL_DataLayer , 0, 1 , -1 , 11 , x , x , x , x , x , x , x , 0 , 1 , 244 , 56 , 56 , 0 , 0 , 0 , 0 ,
TIDL returned with error code : -1100, refer to interface header file for error code details
Error at line: 1596 : in file src/tidl_tb.c, of function : test_ti_dl_ivison
End of config list found !
It seems the "ConvolutionDepthwise" layer is unsupported, but this time it went better than AlexNet, because it produced three files under the tempDir folder (configFileslist.txt, qunat_stats_config.txt, and temp_net.bin), plus tidl_net_shffle.bin and tidl_param_shuffle.bin under the tidl_models folder. Can I assume it works?
Besides, if it doesn't work, can I just delete the "ConvolutionDepthwise" layer from ShuffleNet, or delete the "Normalize" layer from AlexNet, so that they fit TIDL? Or is there any method to improve Caffe-Jacinto11's accuracy to reach 90%? Thanks a lot.
It seems both of those networks include layers unsupported by TIDL. It is worth noting that running deep learning models on an embedded platform will require sacrifices in accuracy for the network to produce results quickly enough to be actionable. This is briefly explained before the list of verified network topologies. The set of verified topologies are chosen with this in mind.
Removing a normalization layer might be fine for AlexNet, though I wouldn't recommend it. I would expect the shuffle layer to be important for ShuffleNet, though. You can try removing these layers, but I would not expect great results. I see the tidl_net*.bin and tidl_param*.bin files were created, but they probably did not finish; I would be surprised if they worked.
I'm not sure I understand what you mean by:
"My purpose is to use a model that can run on TIDL on the AM5708 to infer the result, not to use the TI device translator tool so the model fits TIDL (like Caffe-Jacinto11); that would be very good, but the accuracy shouldn't be too low. What should I do? Can you give me some advice? Thanks a lot."
particularly the part about not using the device translator tool. When you train one of these models, you'll need to convert it into a pair of binaries describing the network and its weights for it to run through TIDL on an embedded platform. This is a requirement if you wish to run it on an AM57x processor.
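As a sketch of what that conversion step looks like: the tidl_model_import.out tool is driven by a small config file. The field names below follow the example import configs shipped with the Processor SDK (under testvecs/config/import), but the file names are placeholders; check them against the configs in your SDK version.

```text
# Sketch of a TIDL import config; file names are placeholders.
randParams         = 0
modelType          = 0            # 0 = Caffe, 1 = TensorFlow
quantizationStyle  = 1
inputNetFile       = "deploy.prototxt"
inputParamsFile    = "model.caffemodel"
outputNetFile      = "tidl_net_model.bin"
outputParamsFile   = "tidl_param_model.bin"
sampleInData       = "sample_img.raw"
```

The two output files are the pair of binaries mentioned above.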
My recommendation would be to use one of the topologies verified for TIDL or to find another suitable network that only contains supported layers.
Hi,
Thank you so much for your explanation. After reading your reply, I understand that I need the device translator tool to convert the model into a pair of binaries describing the network and weights, so it can run through TIDL on an embedded platform.
Today I tried to install Caffe-Jacinto. After changing the "Makefile" and "Makefile.config" files, some problems during installation were fixed by searching the web, but the following problem appeared 3 times:
CXX examples/siamese/convert_mnist_siamese_data.cpp
CXX .build_release/src/caffe/proto/caffe.pb.cc
AR -o .build_release/lib/libcaffe-nv.a
LD -o .build_release/lib/libcaffe-nv.so.0.17.0
CXX/LD -o .build_release/tools/upgrade_net_proto_binary.bin
CXX/LD -o .build_release/tools/compute_image_mean.bin
CXX/LD -o .build_release/tools/caffe.bin
CXX/LD -o .build_release/tools/upgrade_solver_proto_text.bin
.build_release/lib/libcaffe-nv.so: undefined reference to `cudnnSetConvolutionGroupCount'
.build_release/lib/libcaffe-nv.so: undefined reference to `cudnnSetConvolutionMathType'
collect2: error: ld returned 1 exit status
Makefile:654: recipe for target '.build_release/tools/compute_image_mean.bin' failed
make: *** [.build_release/tools/compute_image_mean.bin] Error 1
make: *** Waiting for unfinished jobs....
.build_release/lib/libcaffe-nv.so: undefined reference to `cudnnSetConvolutionGroupCount'
.build_release/lib/libcaffe-nv.so: undefined reference to `cudnnSetConvolutionMathType'
collect2: error: ld returned 1 exit status
Makefile:654: recipe for target '.build_release/tools/upgrade_solver_proto_text.bin' failed
make: *** [.build_release/tools/upgrade_solver_proto_text.bin] Error 1
.build_release/lib/libcaffe-nv.so: undefined reference to `cudnnSetConvolutionGroupCount'
.build_release/lib/libcaffe-nv.so: undefined reference to `cudnnSetConvolutionMathType'
collect2: error: ld returned 1 exit status
Makefile:654: recipe for target '.build_release/tools/caffe.bin' failed
make: *** [.build_release/tools/caffe.bin] Error 1
.build_release/lib/libcaffe-nv.so: undefined reference to `cudnnSetConvolutionGroupCount'
.build_release/lib/libcaffe-nv.so: undefined reference to `cudnnSetConvolutionMathType'
collect2: error: ld returned 1 exit status
Makefile:654: recipe for target '.build_release/tools/upgrade_net_proto_binary.bin' failed
make: *** [.build_release/tools/upgrade_net_proto_binary.bin] Error 1
lab@lab-System-Product-Name:/home/caffeinstall/caffe$
Until now, I can't find the reason and don't know how to solve this. Could you please help me? Thanks in advance.
Warm regards
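P.S. While searching the web, I read that both undefined symbols (cudnnSetConvolutionGroupCount and cudnnSetConvolutionMathType) were added in cuDNN 7, so maybe my build is linking against an older cuDNN. Is this the right way to check the installed version? (The header path below is the usual default; it may be different on other machines.)

```shell
# Check the installed cuDNN version; the missing symbols need cuDNN 7+.
# /usr/local/cuda/include is a typical default path (may differ).
CUDNN_HEADER=/usr/local/cuda/include/cudnn.h
echo "Checking cuDNN version in $CUDNN_HEADER"
if [ -f "$CUDNN_HEADER" ]; then
    grep -E "#define CUDNN_(MAJOR|MINOR|PATCHLEVEL)" "$CUDNN_HEADER"
else
    echo "cudnn.h not found at this path"
fi
```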
Hi,
There is one thing I'm not sure about: can I use OpenCV version 3.4, or must it be OpenCV 3.1? Thanks in advance.