
AWR6843ISK: Best practices for maintaining command line builds

Part Number: AWR6843ISK
Other Parts Discussed in Thread: CODECOMPOSER

Overview
This question is interesting in the sense that it’s kind of a gray area between build tools (gmake), build environment (CCS), and the underlying target platform (AWR6843ISK). If this is the wrong forum category, let me know and I will move it.

In my day-to-day development for the AWR6843ISK I rely heavily on CCS to compile, load, and debug our application. If I'm developing and need to include one of the precompiled mmWave libraries, I simply add the dependency to the project, recompile, and I'm on my way.

However, under the covers CCS is managing those dependencies and, if configured to do so, will automagically generate the corresponding makefile based on the build configuration (Debug, Release, etc.).

Question
That’s where the question really comes into play. As we scale the project, CI servers will coordinate the nightly builds. However, CI servers are generally headless; there’s no Code Composer GUI or anything like that, by design. In this situation you end up with two build systems: make (gmake in this case) for the headless CI, and CCS for developers.

In terms of the TI Ecosystem, what is the best practice with respect to maintaining this sort of a relationship?

To elaborate a bit more, in my previous projects the net result is generally the same. We have an IDE (insert vendor here), or potentially just a front-end debugger (WinIDEA for example). Along with that, we have a standard make build system, which is completely independent of the IDE project structure. The CI relies exclusively on makefiles to build the project. For that same project, developers who need to debug will fire up the IDE to step through code and make changes when needed.

At the end of the day, we end up with two build systems to maintain: the IDE itself (project structure, dependency management, etc.) and the makefiles for CLI builds. All too often someone (like me... oops) who is debugging will make a dependency change, for example adding a library to the project XML, but forget to update the makefile. It compiles fine for the dev using CCS, but of course causes a build failure when kicked off through make, because the library was added in CCS but never explicitly added to the makefile. Since both build systems point at the same source, both need to be maintained.

Summary
So, all of that, just to ask a simple question.

As I see it, from a TI Ecosystem perspective, we have 3 options for CLI builds:

  1. Install Code Composer Studio on the CI server of your choice, for example Jenkins, and then write scripts to interact with CCS, as thoroughly explained here, to kick off the build (see the first sketch after this list). This might be a hard sell to DevOps, as you're now pulling a full-blown IDE into the automated build process, along with all of its external dependencies, drivers, etc.

  2. Create your own makefiles. You wouldn’t necessarily have to start from scratch; just look at the existing makefile structure within the mmWave SDK OOB demo. Pull in the files of interest, such as setenv.bat, checkenv.bat, mmWave_sdk.mak, etc., and you're on your way (see the second sketch after this list). The downside is that you now have to maintain both CCS and the makefiles.

  3. Use the autogenerated makefile (if enabled in CCS) and drive it directly with gmake (see the third sketch after this list). As noted by the first comment in that makefile ("# Automatically-generated file. Do not edit!"), the downside is that you cannot modify the file; presumably any edits would be overwritten on the next build. There may be a way around that, but I'm not sure.
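
For reference, here is roughly what option 1 looks like on a headless node. This is only a sketch: the install path, workspace path, and project name are placeholders for my setup, and the command-line application IDs should be checked against the CCS version's own command-line build documentation.

    :: Sketch of a headless CCS build on a Jenkins Windows node.
    :: CCS_ECLIPSE, the workspace, and the project path/name are placeholders.
    set CCS_ECLIPSE=C:\ti\ccs\eclipse\eclipsec.exe

    :: Import the CCS project into a throwaway workspace...
    "%CCS_ECLIPSE%" -noSplash -data C:\build\workspace ^
        -application com.ti.ccstudio.apps.projectImport ^
        -ccs.location C:\build\my_awr6843_app

    :: ...then build the desired configuration of that project.
    "%CCS_ECLIPSE%" -noSplash -data C:\build\workspace ^
        -application com.ti.ccstudio.apps.projectBuild ^
        -ccs.projects my_awr6843_app -ccs.configuration Release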
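Option 2 would boil down to a CI step along these lines, reusing the SDK's environment scripts before invoking gmake. Again a sketch only: the SDK install directory, the application directory, and the make targets are placeholders, and a Linux node would use setenv.sh instead of the .bat scripts.

    :: Sketch of a CI build step that reuses the mmWave SDK make environment.
    :: SDK version/path and the application directory are placeholders.
    cd C:\ti\mmwave_sdk_xx_xx_xx_xx\packages\scripts\windows
    call setenv.bat
    call checkenv.bat

    :: Build the application (or the OOB demo) with the SDK's gmake rules.
    cd C:\build\my_awr6843_app
    gmake clean
    gmake all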
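And option 3 would amount to pointing the gmake that ships with CCS at the generated makefile in the configuration folder. Paths here are placeholders, and note the caveat that the generated files typically embed absolute tool paths from the machine that produced them, so the CI node would need a matching toolchain layout.

    :: Sketch of driving the CCS-autogenerated makefile directly.
    :: Assumes the Debug folder (with its generated makefile) exists on the CI
    :: node and that the tool paths inside it resolve there.
    cd C:\build\my_awr6843_app\Debug
    C:\ti\ccs\utils\bin\gmake.exe -k all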

Each option above has its own set of pros and cons. But I am curious, from a TI point of view: which do you prefer in your CI systems, what works best in terms of dependency management, and, most importantly, which is the most scalable approach? Obviously it's somewhat dependent on the use case, but I'm just looking for a general idea.