How can I install and use compilers for embedded C on an external server?

Short Question
Is there an accepted way to run compilers/linkers for embedded software projects on a remote server while still being able to program and debug the software on a local machine?

Note: I know every IDE will be different, so what I am after is how to define a workflow to accomplish this task, assuming that the IDE can be run using the .o/.elf files built on the remote server.

Areas of Concern
1) Networking to a virtual Windows machine.
2) How / when to transfer the source code to the server to build.

Background
Each family of microprocessor that our software team works with requires its own compiler, IDE, and programmer. This, over time, creates many difficulties to overcome.

1) Each developer requires their own, often pricey, license.
2) To pick up a project that another developer started requires extra care to make sure all of the compiler settings are the same.
3) Supporting legacy software may require an old compiler that conflicts with the currently installed one.
... the list goes on and on.

Edit: 7-10-2011 1:30 PM CST
1) The compilers I am speaking of are indeed cross compilers
2) A short list of processor families this system ideally would support: Motorola Coldfire, PIC and STM8.
3) Our Coldfire compiler is a variant of GCC, but we have to support multiple versions of it. All the other targets use a target-specific compiler that does not offer a floating license.
4) To address littleadv, what I would like to accomplish is an external build server.
5) We currently use a combination of SVN and GIT hosted on an online repository for version control. This is in fact how I thought I would be transferring files to the build server.
6) We are stuck with Windows for most of the compilers.

I now believe that the direction to go is an external build server. There are a few obstacles to overcome yet. I will assume that we will have to transfer the source files to the server via version-control software. Seeing as multiple product lines require access to the same compilers, having an instance for each project does not seem practical.

Would it make sense to create a repository for each compiler that would include folders for build, source, include, output, etc., then have scripts on the users' end that take care of moving files from the IDE's file structure to the required structure for the compiler? This approach would keep the project repository from being thrashed and give a sense of how many times a compiler has been used. Thanks for all of the great responses so far!
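For illustration, the user-side staging script could be as simple as the sketch below; the language choice, every path, and the layout mapping are all hypothetical:

```ruby
# Hypothetical helper: stage files from the IDE's project layout into the
# build/source/include layout of the per-compiler repository.
require 'fileutils'

IDE_ROOT      = 'C:/Projects/MyProject'  # assumed IDE project location
COMPILER_REPO = 'C:/CompilerRepos/PIC'   # assumed per-compiler repo checkout

# Map IDE-side globs to folders in the compiler repository.
{ 'src/**/*.c' => 'source', 'inc/**/*.h' => 'include' }.each do |glob, dest|
  dest_dir = File.join(COMPILER_REPO, dest)
  FileUtils.mkdir_p(dest_dir)
  Dir.glob(File.join(IDE_ROOT, glob)).each { |f| FileUtils.cp(f, dest_dir) }
end
```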

Asked Jul 10 '11 by Adam Lewis




3 Answers

In my opinion, implementing an automated build server would be the cleanest solution to what you're trying to achieve, with an additional benefit: continuous integration! (I'll touch on CI a bit later.)

There are plenty of tools out there to use. @Clifford has already mentioned CMake. But some others are:

  • Hudson (Open Source)
  • CruiseControl (Open Source)
  • TeamCity (Commercial - But it has a fairly generous free version that allows up to 3 build agents and 20 build configurations. The enterprise version of TeamCity is what my company uses so my answer will be geared towards this as it's what I know but concepts will likely apply across multiple tools)

So first of all, I'll try to explain what we do and suggest how this might work for you. I don't suggest this is the accepted way to do things, but it has worked for us. As I mentioned, we use TeamCity for our build server. Each software project is added into TeamCity and build configurations are set up. The build configurations tell TeamCity when to build, how to build, and where your project's SCM repository is. We use two different build configurations for each project: one we call "integration", which monitors the project's SCM repository and triggers an incremental build when a check-in is detected; the other we call "nightly", which triggers at a set time every night and performs a completely clean build.

Incidentally, just a quick note regarding SCM: for this to work most cleanly, I think the SCM for each project should be used in a stable-trunk topology. If your developers all work from their own branches, you'd probably need separate build configurations for each developer, which I think would get unnecessarily messy. We've set up our build server with its own SCM user account, but with read-only access.

So when a build is triggered for a particular build configuration the server grabs the latest files from the repository and sends them to a "build agent" which executes the build using a build script. We've used Rake to script our builds and automated testing but you can use whatever. The build agent can be on the same PC as the server but in our case we have a separate PC because our build server is centrally located with the ICT department whereas we need our build agent to be physically located with my team (for automated on-target testing). So the toolchains that you use are installed on your build agent.

How could this work for you?

Let's say you work for TidyDog and you have two projects on the go:

  1. "PoopScoop" is based on a PIC18F target compiled using the C18 compiler has its trunk located in your SCM at //PoopScoop/TRUNK/
  2. "PoopBag" is based on a ColdFire target compiled with GCC has its trunk located at //PoopBag/TRUNK/

The compilers that you need in order to build all projects are installed on your build agent (we'll call it TidyDogBuilder). Whether that's the same PC that's running the build server or a separate box depends on your situation. Each project has its own build script (e.g. //PoopScoop/Rakefile.rb and //PoopBag/Rakefile.rb) which handles source file dependencies and invocation of the appropriate compilers. You could, for example, go to //PoopScoop/ in a command prompt, enter rake, and the build script would take care of compiling the PoopScoop project within the command prompt.
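To make that concrete, here is a minimal sketch of what //PoopScoop/Rakefile.rb might contain; the compiler path, device flag, and directory layout are assumptions for illustration, not a verified C18 invocation:

```ruby
# Rakefile.rb -- hypothetical build script for PoopScoop (PIC18F / C18).
# Tool paths and flags below are placeholders; adjust to the real install.
CC      = 'C:/MCC18/bin/mcc18.exe'
SOURCES = FileList['Source/*.c']
OBJECTS = SOURCES.pathmap('Output/%n.o')

directory 'Output'

# Rebuild an object file whenever its corresponding source file changes.
OBJECTS.zip(SOURCES).each do |obj, src|
  file obj => [src, 'Output'] do
    sh "#{CC} -p=18F4520 -I=Include #{src} -fo=#{obj}"
  end
end

namespace :incremental do
  desc 'Recompile only changed sources, then link'
  task :build => OBJECTS do
    # Linker call is schematic; real MPLINK syntax differs.
    sh "mplink #{OBJECTS.join(' ')} /o Output/PoopScoop.cof"
  end
end
```

The point is simply that the build script gives developers and the build server one identical entry point into the build.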

You then have your build configurations set up on the build server. A build configuration for PoopScoop, for example, would specify which SCM tool you're using and the repository location (e.g. //PoopScoop/TRUNK/), which build agent to use (e.g. TidyDogBuilder), where to find the appropriate build script and any necessary command (e.g. //PoopScoop/Rakefile.rb invoked with rake incremental:build), and what event triggers a build (e.g. detection of a check-in to //PoopScoop/TRUNK/). So the idea is that if someone submits a change to //PoopScoop/TRUNK/Source/Scooper.c, the build server detects this change, grabs the latest revisions of the source files from the repository, sends them to the build agent to be compiled using the build script, and in the end emails the build result to every developer who has a change in the build.

If your projects need to be compiled for multiple targets you would just modify the project's build script to handle this (e.g. You might have commands like rake build:PIC18 or rake build:Coldfire) and set up a separate build configuration on the build server for each target.
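A hedged sketch of how those per-target tasks might sit in one Rakefile; the toolchain commands and flags here are assumptions:

```ruby
# Hypothetical per-target tasks: rake build:PIC18 or rake build:Coldfire.
TARGETS = {
  'PIC18'    => { cc: 'mcc18',        flags: '-p=18F4520' },  # assumed C18 driver
  'Coldfire' => { cc: 'm68k-elf-gcc', flags: '-mcpu=5282' }   # assumed GCC variant
}

namespace :build do
  TARGETS.each do |target, tool|
    desc "Compile all sources for #{target}"
    task target do
      mkdir_p "Output/#{target}"
      FileList['Source/*.c'].each do |src|
        obj = "Output/#{target}/#{File.basename(src, '.c')}.o"
        sh "#{tool[:cc]} #{tool[:flags]} -c #{src} -o #{obj}"
      end
    end
  end
end
```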

Continuous Integration

So with this system you get continuous integration up and running. Modify your build scripts to run unit tests as well as compile your project and you can have your unit testing being performed automatically after every change. The motive for this is to try to pick up problems as early as possible, as you're developing rather than being surprised during verification activities.
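As a sketch, assuming hardware dependencies can be stubbed out so the tests compile with a host compiler, the CI entry point can be little more than a test task chained onto the build:

```ruby
# Hypothetical CI entry point for the build server to run on every check-in.
desc 'Build, then run host-compiled unit tests'
task :ci => ['incremental:build', :test]

task :test do
  # Assumes hardware access is stubbed so the sources compile with host gcc.
  sh 'gcc -I Include -I Test Test/*.c Source/*.c -o Output/unit_tests'
  sh './Output/unit_tests'  # a non-zero exit fails the build, which the server reports
end
```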

Closing Thoughts

  • Developers not having all toolchain installations would be somewhat dependent on what work they do most often. If it was me and my work tended to be mostly low level, interacting a lot with the hardware, not having the compilers on my workstation would annoy the bejeezes out of me. If on the other hand I was mostly working at an application level and could stub out hardware dependencies it may not be such a problem.
  • TeamCity has a plugin for Eclipse with quite a cool feature: personal builds. A developer can run a build of a pending changelist against any given build configuration, meaning they can initiate a build of pre-committed code on the build server without actually submitting the code to SCM. We use this to trial changes against our current unit tests and static analysis, as our expensive test tools are only installed on the build agent.
  • With regards to accessing build artifacts when "on the road" I agree, something like a VPN into your intranet is probably the easiest option.
Answered Oct 22 '22 by Jonathan Thomson


One part of your question that hasn't been addressed very much in the other answers (at least as of the time when I'm writing this) is how to transfer the files to the build server. I can provide some experience on that point, as my own development process is reasonably close to that part of your situation.

In my case, I use the "Unison" utility to mirror a section of my home directory on my development laptop to a section of home directory on the build servers. From a programmer's point of view, Unison behaves like a bidirectional rsync: it keeps a stored set of checksums to determine whether files on either end of the connection have been modified, and uses this to synchronize in both directions; any modifications I make locally get transferred across to the remote end, and vice versa. The usual mode of operation is to ask for confirmation of all transfers; you can turn that off, but I find it handy as a check to make sure I changed what I think I changed.

So, my usual workflow is the following (a scripted sketch of the loop follows the list):

  • Edit files locally on my development machine.
  • Run "unison" to synchronize those edits up to the build server.
  • In an ssh connection to the build server, run the compiler.
  • Also in that ssh connection, run the program, producing an output file. (This is of course a little different from your situation.)
  • Run "unison" again to synchronize the changed output file back to my development machine. (You'd be synchronizing the compiled program here.)
  • Look at the output results on my development machine.
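If you wanted to script that loop, here is a minimal sketch as a rake task; the host name, directory names, and the use of make on the server are all hypothetical:

```ruby
# Hypothetical wrapper around the edit/sync/build/sync loop described above.
BUILD_HOST = 'buildserver'   # assumption: an ssh-reachable build machine
REMOTE_DIR = 'work/project'  # assumption: mirror location, relative to the remote home
LOCAL_DIR  = "#{Dir.home}/work/project"

desc 'Sync sources up, build remotely, sync artifacts back'
task :remote_build do
  sync = "unison -batch #{LOCAL_DIR} ssh://#{BUILD_HOST}/#{REMOTE_DIR}"
  sh sync                                            # push local edits up
  sh "ssh #{BUILD_HOST} 'cd #{REMOTE_DIR} && make'"  # compile on the server
  sh sync                                            # pull the built .o/.elf files back
end
```

Note that -batch suppresses the per-file confirmation mentioned above, so you may prefer to leave that flag off.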

It's not quite as fast as the edit-compile-run loop on a local machine, but I find that it's still fast enough not to be annoying. And it's much less heavyweight than using source control as an intermediary; you don't need to check in every edit you make and record it for posterity.

Also, Unison has the advantage of working pretty well across platforms; you can use it on Windows (most easily with Cygwin, though that's not required), and it can tunnel over an SSH connection if you run an SSH server on your Windows machine, run its own connection service on the build server, or simply use a Windows file share and treat the build server's files as "local" files. (Or you could put the server-side files on a Linux-based Samba share in the server farm, and mount that on your Windows build VMs; that may be easier than having "local" files in the VMs.)

Edit: Actually, this is sort of a modification of option 2 in littleadv's discussion of file transferring; it takes the place of editing the files directly on the server via Samba/NFS shares. And it works reasonably well in parallel with that -- I find that having a local cache of the files is ideal and avoids network-lag issues when working remotely, but other engineers at my company prefer something like sshfs (and, of course, on-site using Samba or NFS is fine). It all produces the same result from the point of view of the build server.

Answered Oct 22 '22 by Brooks Moses


I'm not sure I understand what you mean, but I'll try to answer what I think that the question is :-)

First of all, you're talking about cross-compilers, right? You're compiling code on one system to be run on another system.

Second, you look for a "floating" license model, instead of having a dedicated compiler license per developer.

Third, you're looking to have a build machine where everyone will compile, instead of each developer compiling on his own machine.

These issues are not the same. I'll try to cover them:

  1. Cross compilers - some are free, some are licensed. Some come with an IDE; some are just command-line compilers that can be integrated into Eclipse/VS/SlickEdit/vi/whatever else. I don't know which one you're using, so let's take a look at Tornado (the VxWorks toolchain). It has its own horrible IDE, but can be integrated into others (I've used SlickEdit with its project files directly; works like a charm). The Tornado compiler requires a license and comes in several different models, so we'll use it for the next two points.

  2. Floating licenses. Tornado, as an example, can come with either a single-license-per-installation model or a floating license that allocates licenses per request. If you're using a build machine, you'll need either a single license (in which case you can only run one instance at a time, which defeats the purpose) or a floating license to run several instances at a time. Some cross compilers/libraries don't require licenses at all (various GCC flavors, for example).

  3. Build machine - I've used the VxWorks Tornado both as a dedicated compiler on my own PC and as a build-machine installation.

For a build machine you need a way to pass it the code to build. That is your problem #2:

2) How / when to transfer the source code to the server to build.

Sharing over the network is not a good idea as the network latency will make the compilation times unbearable. Instead, do one of these:

  1. Install the source control on the build machine and have the developers pass the code through source control. That poses the risk of trashing the source control, so what we did instead was this:

  2. All the developers checked out files on the build machine directly (a big UNIX system with a lot of storage), and each developer would edit the files over a network share (Samba or NFS), which on the 1 Gbit company LAN was fine, and compile locally on the build machine. Some would edit directly on the Unix system using vi/emacs/the Unix version of the Tornado IDE, which I hated. That also answers your problem #1:

1) Networking to a virtual Windows machine.

(It doesn't have to be a virtual Windows machine, Linux can work too if you have the cross-compiler from Linux to your embedded system).

Hope that helps.

Answered Oct 22 '22 by littleadv