
TLV1562 Data Sheet/Application Questions - URGENT

Other Parts Discussed in Thread: TLV1562, OMAP3530

I am extremely frustrated by the poor quality control of data sheets and the extremely tardy response from TI to support queries.  I am a consulting electronic engineer using the TI OMAP3530, multiple TLV1562s, and a number of other TI parts in a new system design for one of my clients.  I found a large number of glaring omissions and mistakes in the TI OMAP CPU documentation that cost us weeks of debug time, and now I am having the same type of problems with the TLV1562 ADCs.

I posted a query (8 days ago - unanswered) on the high speed data converter forum and it was moved twice (I assume by forum moderators) and lost in the process - very frustrating!!!

I then called in 3 days ago and logged a support request - still no response!!!   I can only deduce that TI is not interested in taking care of customers - I should have used a Linear Technology converter; their data sheet completeness and quality are much better and their support is incredible!

Anyway, that's my beef, now to the questions.

I am using 4 of these converters configured in single-ended mode for a 16 channel analog input system and am planning to sequence through all the channels.  There are a number of issues that are not clear in the data sheet.

  1. Do the CR0 and CR1 registers have to be written together in sequence for every channel?
  2. Does CR1 apply to the whole chip (i.e. all 4 channels)?  
  3. Does mid-scale error calibration (internal offset) have to be done for every channel, or only once per chip, or does it have to be done for each sample and hold (i.e. 2 per chip)?
  4. Referencing the application note "Interfacing the TLV1562 Parallel ADC to the TMS320C54x DSP", there is a variable in the code called "CR_PROBLEM" which appears to indicate a problem, as there is also a note that says "reset old CSTART mode initialization otherwise the ADC never sets back it's INT pin to show a sample is available".  It is also evident that offset calibration is done using CSTART while regular conversions are RD-initiated.  Why is this, when the data sheet indicates that both offset calibration and regular conversions can be initiated using either RD or CSTART?
  5. In the timing requirements in the data sheet (p27), the times td(CSL-WRL) and td(CSL-RDL) do not appear to make sense.  They have a min of 2 ns and a max of 4 ns; surely the timing is not so tight that we have to meet a minimum of 2 ns and a maximum of 4 ns for the CS-to-WR and CS-to-RD delays?  That is only a 2 ns window and almost impossible to guarantee in a practical design.
  6. Some flow charts for setup and conversion sequencing would be a very welcome addition to this data sheet; it's certainly not user friendly, and one has to read a lot between the lines and experiment!

Thank you for your prompt help.

Howard Robson

  • I am using this part in single-ended mode in a design.  There are a number of issues that are not clear in the data sheet.

    1. Do the CR0 and CR1 registers have to be written together in sequence for every channel?
    2. Does CR1 apply to the whole chip (i.e. all 4 channels)?
    3. Does mid-scale error calibration (internal offset) have to be done for every channel, or only once per chip?

    Thank you for your help.

    Howard

  • Hi Howard,

    I found your original post and added it here - my sincere apologies for the delays.

    CR0 and CR1 do not need to be written together in sequence, and you really only need to write CR1 once unless you plan on changing modes or entering software power-down.  CR0 needs to be written with each channel 'pair' access so that the device knows which input(s) to sample.
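
    A minimal sketch of that write discipline in C - the adc_write_cr0/adc_write_cr1 helpers are hypothetical stand-ins for whatever parallel-bus write cycle your DSP or FPGA performs, not anything from the data sheet:

        #include <stdint.h>

        /* Hypothetical helpers: each performs one parallel-bus write
           cycle (/CS + /WR) targeting the named configuration register. */
        void adc_write_cr0(uint16_t value);
        void adc_write_cr1(uint16_t value);

        /* CR1: written once at start-up and left alone unless the
           mode changes or software power-down is entered. */
        void adc_init(uint16_t cr1_value)
        {
            adc_write_cr1(cr1_value);
        }

        /* CR0: rewritten before every channel (pair) access so the
           device knows which input(s) to sample. */
        void adc_select_input(uint16_t cr0_channel_bits)
        {
            adc_write_cr0(cr0_channel_bits);
        }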

    The mid-scale error calibration depends on how you have the device configured - since you mentioned you want to use four of these parts for 16 channels, we're looking at 4 SE channels per device.  If you run in mono interrupt mode, you'll need to calibrate each channel by first selecting the channel to sample and then initiating a conversion.  Dual interrupt mode will calibrate 2 channels simultaneously.
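
    Continuing the sketch above, a per-channel calibration pass in mono interrupt mode might look like the loop below; the handshake helpers and the read-back to finish each cycle are assumptions about the sequencing, and the cr0_cal values would come from the CR0 bit map in the data sheet:

        #include <stdint.h>

        void     adc_write_cr0(uint16_t value);  /* as in the sketch above       */
        void     adc_pulse_cstart(void);         /* pulse /CSTART: start a cycle */
        void     adc_wait_int(void);             /* block until /INT goes low    */
        uint16_t adc_read_result(void);          /* one /RD cycle; releases /INT */

        /* One mid-scale (internal offset) calibration per SE channel. */
        void adc_calibrate_all(const uint16_t cr0_cal[4])
        {
            for (int ch = 0; ch < 4; ch++) {
                adc_write_cr0(cr0_cal[ch]); /* channel select + calibration bits */
                adc_pulse_cstart();
                adc_wait_int();
                (void)adc_read_result();    /* complete the cycle; value unused  */
            }
        }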

    I am sorry to say that I can't answer your question on the application note at the moment - I'll need to research that a little.  It was done with an old processor in assembly with an EVM that's been discontinued.  The problem may have been hardware related.

    I'd have to do a little research on the timing from CSL to RDL or WRL as well - the data sheet states that /CS can be tied low, so I suspect at the moment that these numbers may be based on running the device at its absolute maximum speed while actively controlling /CS.

    Will look into the flow chart query as well - I may not be able to provide anything more than the one in the application note you mentioned.

  • Hi Tom,

    It's encouraging to receive a reply at last.  The software engineer and I have spent 4 days of iterative trial and error and finally found what appears to be a solution to the problem, so I wanted to fill you in on what we found.

    When the chips are powered up, they come up, in the majority of instances, with the /INT pin active (low), although once in a while 1 of the 4 will be high.  With the pin low, the chip does not behave correctly if you start with the normal sequence of writing CR0 (value 0010000000 for channel 0) and CR1 (value 0101000100) and then pulsing /CSTART to initiate a conversion to measure the internal offset: because the interrupt is already pending, the /RD to obtain the result occurs immediately (which I am sure is not correct), after the /RD the /INT again goes active immediately, and the chip remains in this out-of-sequence mode indefinitely.  With this in mind we searched for a way to get the /INT pin high and found that one /RD does it (before doing any writing to CR0 or CR1); once this is done, the chip appears to function correctly.  The sequence we ended up with is sketched below.
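
    For anyone hitting the same behavior, here is that sequence in C; the adc_* helpers are hypothetical wrappers around the parallel-bus cycles (the same shape as Tom's sketch above), but the register values and the ordering are exactly what we used:

        #include <stdint.h>

        void     adc_write_cr0(uint16_t value);
        void     adc_write_cr1(uint16_t value);
        void     adc_pulse_cstart(void);
        void     adc_wait_int(void);
        uint16_t adc_read_result(void);   /* one /RD cycle */

        void adc_powerup_init(void)
        {
            /* Work-around: one dummy /RD before any register writes
               forces /INT high when the part powers up with it low. */
            (void)adc_read_result();

            adc_write_cr0(0x080);    /* 0b0010000000: select channel 0    */
            adc_write_cr1(0x144);    /* 0b0101000100: mode setup          */

            adc_pulse_cstart();      /* start internal offset measurement */
            adc_wait_int();          /* /INT goes low: result ready       */
            (void)adc_read_result(); /* /RD fetches result, /INT goes high */
        }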

    We found this late yesterday and have not been able to test our solution out yet to see that we get good results.  We started out wanting to use /RD to initiate conversions and encountered problems and so switched to using /CSTART, but would prefer to use /RD so will have to see if our "fix" resolves the problems we were encountering there.

    Since this issue is complex, it's hard to explain here and it would be much easier to talk through on the phone.  We are on PST and I will be at the client's from 10:00 am onwards and would welcome the opportunity to talk this through.  I gave my cell # when I called in to request support and registered a support case (sorry, I don't have the case # as it's written down on a data sheet at the client's site).

    Kind regards

    Howard Robson

    HRtronics, LLC

  • Hi Howard,

    It was nice chatting with you while I was in Asia - has everything been resolved now?

  • Hi Tom,

    Yes, it was a pleasure talking to you, and thank you for taking time out of your busy schedule to help me.  I did eventually get the read and write cycles working correctly, but had to extend to 2 states (the FPGA state machine that controls access runs at 18 MHz, so 1 state = 55 ns, which should be enough according to the data sheet).

    For my own peace of mind, I would still like to know what is meant by the 2 ns to 4 ns range (min to max) on /CS to /WR and /CS to /RD, as this does not make a lot of sense.  I am switching /CS and /RD or /WR on the same clock transition of the state machine so they occur simultaneously (0 ns from /CS to /RD or /WR), but as we discussed, it seems to make sense (reading between the lines) that there is no maximum time (since /CS can be held active); there is, however, no clear, definitive conclusion for the minimum time.

    Thank you and regards

    Howard

  • Hi Tom,

    Correction to my statement above - I am NOT switching /CS and /RD or /WR on the same clock transition of the state machine, I switch /RD or /WR half a state machine cycle later (opposite clock edge), and this is what finally got the circuit working.

    Sorry for the confusion.

    Regards

    Howard

  • Thanks for the update Howard - I'll see if I can dig up anything else on the 2-4ns timing from /CS to /RD or /WR as well.

  • Hi Tom,

    We have not been in touch for some time.  I have a number of questions for you:

    1. Have you been able to dig up any additional data on the TLV1562 timing?
    2. Any recommendations regarding layout for the TLV1562?  We are experiencing some noise and are looking to improve the layout for the next rev of the board.
    3. When we spoke previously, you mentioned a few other newer parts that might be a good fit.  To refresh you on the requirements: we need 16 channels, 8-10 bit resolution, 3-10 MSPS, and preferably a parallel bus type interface.

    Thank you for your help.

  • Hi Howard!

    Nothing new to tell you on the TLV1562 timing front, but if you want to send me your Gerber files, I'd be happy to review the layout for you.  Sixteen channels at that speed may be a challenge - you might be looking at a two-chip solution.  For the 3-10 MSPS, is that per channel or effective throughput?   For instance: 8 channels simultaneously sampled at ~700 kSPS for ~5.6 MSPS (effective), versus 8 channels * 3 MSPS/CH, which calls for a 24 MSPS converter?

  • Hi Tom,

    Thank you for your prompt reply.  I am laying out a new daughter card to test improvements to the layout, and may take you up on your offer after that is done.  We are not looking for 16 channels in one chip - the current solution uses 4 of the TLV1562s.  With my architecture, I am able to trigger all chips for a new conversion simultaneously.  We would like to achieve a minimum of 300 kSPS on each of the 16 channels, which works out to about 4.8 MSPS effective throughput, but it obviously depends a bit on the specifics of the ADC's bus interface and how quickly I can read the data out of the ADCs (currently 1 common data bus to the 4 ADCs, but I may expand to reading 2 in parallel).

    Regards