Debug Server Scripting (DSS) generic loader 'loadti' caching of loaded objects

Hi,
we perform unit testing in a Linux environment using the DSS (Debug Server Scripting) generic loader 'loadti'.
Consecutive automation tests are performed sequentially.

A Java "Segmentation fault" often occurs at some point, usually after several preceding 'loadti'-loaded tests have run fine.
Once the failure occurs, it persists at the same test (even when the test is repeated).

  • testEnv.outFiles: ____.out
    Loading ____.out
    #
    # An unexpected error has been detected by Java Runtime Environment:
    #
    #  SIGBUS (0x7) at pc=0xf7e484fc, pid=46945, tid=2916301680
    #
    # Java VM: Java HotSpot(TM) Server VM (11.3-b02 mixed mode linux-x86)
    # Problematic frame:
    # C  [libc.so.6+0x1384fc]

The Java in use is the one that ships with CCS5.5: java version "1.6.0_13"

We have already discovered the cause of this behaviour:

  • 'loadti' caches loaded .out files in the folder /dev/shm/.
    After the tests are performed and 'loadti' terminates, most of the cached objects remain in this folder as
    'OFS_outname.out_53394'
  • Over time this data grows too large, causing the Java error.
    Manually emptying the folder helps.
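For illustration, the manual cleanup could be sketched as a small shell function; the directory is a parameter here (normally /dev/shm/) and the OFS_* name pattern is an assumption taken from the files observed above:

```shell
# clean_loadti_cache: remove the cached object files that 'loadti'
# leaves behind.  The directory is a parameter so the sketch can be
# tried safely anywhere; in practice it would be /dev/shm.
clean_loadti_cache() {
    dir="${1:-/dev/shm}"
    # Match only the observed cache pattern OFS_<name>.out_<number>,
    # so other users of the shared-memory folder are left untouched.
    find "$dir" -maxdepth 1 -type f -name 'OFS_*.out_*' -delete
}
```

Calling such a function between test runs would have the same effect as emptying the folder by hand.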

Finally the question:

  • Is it possible to control what happens to the content of the folder /dev/shm/?

Thanks.

  • Hello,

    That is interesting; I have not seen this issue before. I also do not see the .out files getting cached in the /dev/shm/ directory. However, I used a more recent version of CCS. I'll need to try with CCSv5.5.

    Also, what Linux distro (and version) are you using?

    Thanks

    ki

  • Hello,
    The issue is observed on this machine (test server):

    • Red Hat Ent. Linux Server release 6.10 (Santiago)

    The issue appears there because DSS runs in a Docker container, which runs out of memory as the object trash grows.
    It is annoying to always have to take care of the size of the object trash in /dev/shm/ (by emptying it or otherwise).

    By the way, DSS is used also on various workstations:

    • Red Hat Ent. Linux Server release 6.40, or newer versions (7)

    Here the workstations do not easily run out of memory, so the Java "Segmentation fault" does not appear. However, the DSS object trash can be found in the folder /dev/shm/ as well. The trash files (cached objects) are named according to the same pattern:
    OFS_<modulename>.out_53394

    I do not think the issue has anything to do with the operating system (Linux), but rather with the DSS Java scripting behind it.

    Thanks,

    MladenS

  • I see the behavior when I run loadti with my CCSv5.5 installation. It looks like something that CCS did in older versions and that has since changed. I'll need to see if there is a way to turn this off in CCSv5. But note that CCSv5.5 is very old and unsupported. Is there any reason you still need to use this version?

    Thanks

    ki

  • Here is the strong reason for still using CCSv5.5:
    it is the last CCS version that comes with (cycle-accurate) DSP simulator support.
    Thanks,
    MladenS

  • It appears that the issue is not scripting (or loadti) specific, but occurs whenever a program is loaded by the CCS debugger (whether via scripting or the IDE). It also does not appear to happen with CCS versions 6.x or greater. I do not see an option to turn off this behavior in CCSv5. As a simple workaround, I would suggest creating a little script that clears the folder and then calls loadti.sh. This should have the same effect.
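    The suggested workaround could look like the following sketch. The folder and the wrapped command are parameters here (assumptions made so the sketch is self-contained); in practice they would be /dev/shm and loadti.sh with its usual arguments:

```shell
# run_after_cache_clear: hypothetical wrapper that first clears the
# cached OFS_* objects from a shared-memory folder, then runs the
# given command (in practice: loadti.sh with its arguments).
run_after_cache_clear() {
    dir="$1"; shift
    # Delete only files matching the cache pattern seen in this thread.
    find "$dir" -maxdepth 1 -type f -name 'OFS_*.out_*' -delete
    "$@"
}

# Real usage would be something like:
#   run_after_cache_clear /dev/shm loadti.sh myprogram.out
```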

    ki

  • DSS is started using a (Bash) script like this:
    $DSS_PATH/dss.sh $LOADTI_PATH/main.js ...

    Manually applying a small (post-processing) cleaning script is exactly what I do at the moment.
    I thought there might be a way in DSS scripting itself (or its options) to control this.

    Thanks a lot for checking this!
    MladenS