
Which method is best for saving a large amount of TS data to a file?

Hello all:

  I am testing TSIF0 to receive TS data, and I want to save a large amount of data to a file, for example 1 GB. In order not to lose packets and to keep receiving the TS data in time, which method should I adopt?

 Thanks in advance.

  • Are you using DM6467? Is your OS Linux?

  • Hello, Paul.Yin:

      Yes, I am using DM6467 and the OS is Linux. Right now I can save a small amount of TS data to a file, for example 100 MB, but when I try to save more than 1 GB of TS data I find that some packets are lost: the TSIF driver stops for a moment, then resumes receiving, stops again, resumes again, and so on.

      In the application I have two pthreads. One copies the TS data from the driver space into memory allocated with malloc() in the application; the other reads that memory, processes it, and saves it to a file. The two pthreads synchronize with sem_wait()/sem_post(), or with pthread_cond_wait() and pthread_cond_signal().
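
      The shared control structure prod is not shown in the code below; a minimal sketch of what it presumably looks like, based only on the fields the code uses (the struct name, field types, and initial values are assumptions), is:

    #include <pthread.h>

    #define RING_COUNT 8             /* number of blocks in the ring buffer (assumed value) */

    struct producer {
        pthread_mutex_t lock;        /* protects readpos and writepos */
        pthread_cond_t  notfull;     /* signaled by the saving thread when a slot frees up */
        pthread_cond_t  notempty;    /* signaled by the receiving thread when a block is ready */
        int             writepos;    /* next slot the receiving thread will fill */
        int             readpos;     /* next slot the saving thread will drain */
        int             ReceiveOver; /* set to 1 when reception is finished */
    };

    /* initialization (sketch) */
    struct producer prod_storage = {
        .lock     = PTHREAD_MUTEX_INITIALIZER,
        .notfull  = PTHREAD_COND_INITIALIZER,
        .notempty = PTHREAD_COND_INITIALIZER,
    };
    struct producer *prod = &prod_storage;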

    Below is the simplified code:

    First pthread:

    do {
        pthread_mutex_lock(&prod->lock);
        /* wait while the ring buffer is full */
        while (((prod->writepos + 1) % RING_COUNT) == prod->readpos)
        {
            pthread_cond_wait(&prod->notfull, &prod->lock);
        }
    #ifndef DEBUG
        /* block until the driver has received a new block of PID7 data */
        ret = ioctl(tsif_rx_pid7_fd, TSIF_WAIT_FOR_RX_COMPLETE, 0);
        if (ret != 0)
            myprintf("TSIF_WAIT_FOR_RX_COMPLETE IOCTL for PID7 Failed\n");
    #endif
        /* copy one block (ONE_RING_SIZE packets of 192 bytes) into the ring buffer */
        memcpy(read_pid7_data_buf + prod->writepos * ONE_RING_SIZE * 192,
               rx_pid7_buf, ONE_RING_SIZE * 192);
        prod->writepos++;
        rx_pid7_buf += (ONE_RING_SIZE * 192);
        read_end_offset += (ONE_RING_SIZE * 192);
        /* wrap the pointer into the mmap'ed driver buffer */
        if (rx_pid7_buf >= (char *)(rx_pid7_addr + mmap_len))
            rx_pid7_buf = (char *)rx_pid7_addr;
        /* wrap the ring-buffer write index */
        if (prod->writepos >= RING_COUNT)
            prod->writepos = 0;

        pthread_cond_signal(&prod->notempty);
        pthread_mutex_unlock(&prod->lock);
    } while (read_end_offset < rx_pid7_file_size);

     

     

    Second pthread:

    while (!(prod->ReceiveOver))
    {
        pthread_mutex_lock(&prod->lock);
        /* wait (blocks here) while the ring buffer is empty */
        while (prod->readpos == prod->writepos)
        {
            pthread_cond_wait(&prod->notempty, &prod->lock);
        }
        /* convert the endianness of the block, one 32-bit word at a time */
        ReadStartAddress = read_pid7_data_buf + 192 * ONE_RING_SIZE * (prod->readpos);
        ptr_pid7_save_buf = (int *)ReadStartAddress;
        count = 192 * ONE_RING_SIZE / sizeof(int); /* one block size in the buffer */
        for (i = 0; i < count; i++) {
            *ptr_pid7_save_buf =
                (((*ptr_pid7_save_buf & 0x000000ff) << 24) |
                 ((*ptr_pid7_save_buf & 0x0000ff00) << 8)  |
                 ((*ptr_pid7_save_buf & 0x00ff0000) >> 8)  |
                 ((*ptr_pid7_save_buf & 0xff000000) >> 24));
            ptr_pid7_save_buf++;
        }
        /* each 192-byte packet is a 4-byte prefix followed by a 188-byte TS packet;
           skip the prefix and write only the 188-byte payload */
        for (i = 0; i < ONE_RING_SIZE; i++)
            fwrite(ReadStartAddress + 192 * i + 4, 1, 188, rx_pid7_file_fd);
        /* advance and wrap the ring-buffer read index */
        prod->readpos++;
        if (prod->readpos >= RING_COUNT)
            prod->readpos = 0;
        pthread_cond_signal(&prod->notfull);
        pthread_mutex_unlock(&prod->lock);
        if (prod->ReceiveOver == 1)
            pthread_exit(0);
    }
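
    Note that both the TSIF_WAIT_FOR_RX_COMPLETE ioctl()/memcpy() in the first thread and the byte-swap plus fwrite() in the second thread run while prod->lock is held, so the two threads effectively run one at a time. A minimal sketch of an alternative for the second thread, which holds the lock only to pick up and release a slot and does the heavy work outside the critical section (swap_and_write_block() is a hypothetical helper standing in for the byte-swap and the 188-byte fwrite() loop above):

    int slot;

    pthread_mutex_lock(&prod->lock);
    while (prod->readpos == prod->writepos)
        pthread_cond_wait(&prod->notempty, &prod->lock);
    slot = prod->readpos;                     /* remember which slot to drain */
    pthread_mutex_unlock(&prod->lock);

    /* heavy work done without holding the lock, so the receiving thread can
       keep copying new blocks in parallel; the producer stops one slot short
       of readpos, so this slot is safe until readpos is advanced below */
    swap_and_write_block(read_pid7_data_buf + 192 * ONE_RING_SIZE * slot,
                         rx_pid7_file_fd);    /* hypothetical helper: byte-swap +
                                                 188-byte payload writes */

    pthread_mutex_lock(&prod->lock);
    prod->readpos = (prod->readpos + 1) % RING_COUNT;   /* release the slot */
    pthread_cond_signal(&prod->notfull);
    pthread_mutex_unlock(&prod->lock);

    The first thread could be restructured the same way, so that the wait on the TSIF_WAIT_FOR_RX_COMPLETE ioctl and the memcpy() also happen outside the critical section.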

     

    Is there any better way to do this?

     

  • Hi,

    The occasional stoppage you are observing is due to a time-stamp mismatch. When we were testing TSIF we were not focused on performance, so we have not fine-tuned the application for a real-time scenario. I'll check internally with the application team and get back to you.

    Regards, Sudhakar