Hajo on the BlackArmor Forums has an older posting about getting Debian Linux 5.0 (Lenny) installed on the BA NAS 110/220/4x0. This is not a port that includes a new kernel but simply a minimal install that gets the system set up to install binaries from the Lenny EABI ARM repositories. The kernel that ships with the BA NAS is compatible with those binaries; the newer kernels required by Debian 6 and later are not. This approach has some limitations but offers a way to get at newer pre-compiled software. I don't want to lose the existing functionality on my test system, but the draw of DLNA services is pretty strong right now.
To top it off, Debian has a well-documented cross-compilation setup for people working on non-Intel platforms. This offers a way to compile newer software without killing myself any longer building the entire compiler and supporting software by hand.
The goal has always been to make the NAS device useful and I want to play my movies off it to my TV upstairs so this might be the next thing I play with on the development NAS.
Showing posts with label ARM.
Wednesday, February 22, 2012
Wednesday, December 28, 2011
Serial Port on BlackArmor NAS
I found my old Samsung x426 USB cell phone programmer cable as I was cleaning out my home office. I might have mentioned this cable earlier when I wrote about adding a serial console to the BlackArmor NAS. It is a really old-style cable with a weird cell phone connector from well before mini-USB became standard. The interesting thing about this cable is that it has a serial-to-USB converter chip (2303HXC 0546) built in. That is why the thing was ridiculously expensive ($35) when I bought it back in the day.
This is even on my radar as a possibility because the website CrapNAS had an entry on how to connect a serial or USB cable so you can watch the Linux boot-up in a serial console session. They describe two different ways to do this. For the serial connection, they specify a MAX3232 as necessary. The USB connection has a schematic that includes a 2303HX. I'm not sure what the difference is between that and my 2303HXC, so I will do some reading before I cut into the cable and get out the soldering iron.
Why am I even messing around with a serial console? Because as I plan to mess around with the lower-level system, I should have a back-out plan in case I do something wrong. A serial console on the device gives me more options during boot even if I cannot connect over the network.
As far as getting time to work on the compiler toolchain: I've been relaxing over the holiday break with family and cleaning my messy home office, and I have not even booted the virtual machine since my last post. I intend to get back to it someday.
Sunday, December 4, 2011
Newlib error during compile
Earlier I was working on the problem of compiling very basic programs with arm-elf-gcc. After some work, I found the root problem: the libc replacement from newlib is not being picked up at link time with a proper syscalls implementation. Hardwired hacks to work around it produced an executable, but something is still wrong with how the compiler was built.
I've distilled the error down to a single search.
Google Search: "/lib/libc.a(lib_a-exit.o): In function `exit':" "newlib/libc/stdlib/exit.c:65: undefined reference to `_exit'"
Several other people appear to have run into the same problem, so I'll be reading up on the problem and see what solutions exist.
Saturday, December 3, 2011
Compilation failures
The problem is that I can get an ARM executable, but I'm not sure why the default setup fails to produce a basic HelloWorld.
$ cat test.c
int main (){return 0;}
$ arm-elf-gcc test.c -o test
/home/mcgarrah/DevelToolbin/binaries/arm-4.4.6/bin/../lib/gcc/arm-elf/4.4.6/../../../../arm-elf/lib/libc.a(lib_a-exit.o): In function `exit':
exit.c:(.text+0x54): undefined reference to `_exit'
collect2: ld returned 1 exit status
$ arm-elf-gcc test.c -o test -v
This spews a couple of pages of additional output, which points me to something called "Using built-in specs" and lots of directory path information for things like include files and libraries. These all look about right. A directory /DevelToolbin/binaries/arm-4.4.6/arm-elf/lib has some interesting lib files in it.
$ arm-elf-gcc test.c -o test ~/DevelToolbin/binaries/arm-4.4.6/arm-elf/lib/redboot-syscalls.o
$ file test
test: ELF 32-bit LSB executable, ARM, version 1, statically linked, not stripped
When you are facing a missing library like this, you typically just have to find the right library and add it to your build. In this case, what is missing is so fundamental that we need to figure out why it is broken, or we will hit more issues down the road.
While finding the above syscall libraries, I noticed some specs files:
- linux.specs
- rdimon.specs
- rdpmon.specs
- redboot.specs
RDP, RDI, RedBoot and Linux are all syscall (system call) protocols. A syscall protocol describes how a libc (standard C library) communicates with the operating system kernel. In our case the libc is newlib, which uses a library called libgloss as the interface between libc and the above syscall protocols. I'm not sure which protocol is the default, but something is not right about this combination.
$ arm-elf-gcc -dumpspecs
This dumps even more output, more cryptic still, but it looks important taken together with the above. There are built-in defaults in GCC that these specs files override.
There is an option to change the specs entries from the GCC command line, which I use to identify what is happening.
$ arm-elf-gcc test.c -o test -specs=redboot.specs
$ arm-elf-gcc test.c -o test -specs=pid.specs
$ arm-elf-gcc test.c -o test -specs=linux.specs
The pid and redboot specs differ only in one minor setting, so they are essentially the same. The Linux specs describe a significantly different system call interface.
$ arm-elf-gcc test.c -o test -specs=rdimon.specs
$ arm-elf-gcc test.c -o test -specs=rdpmon.specs
Both the RDI and RDP specs return basically the same missing-library error as above. So we have identified what are probably the default libraries used by libgloss.
This is just a journey down the rabbit hole. So I need to revisit the newlib build process and see what I did wrong in it.
Update December 4th, 2011 7:30pm: What you see above happens with both the gcc 4.4.6 and gcc 4.3.2 builds. Exactly the same problem in both versions of gcc and their associated libraries. I did a full rebuild of 4.3.2, traced the paths built into gcc, and re-verified the paths to all libraries. The same issue still happens. So we have a genuine problem in the build process that is impacting the compiler. It could be anywhere across the binutils, newlib or gcc compilations, so I'll be digging into each. The nice part is that it looks like a common problem, so if I fix it in the 4.3.2 series it will probably fix the 4.4.6 series as well.
Thursday, December 1, 2011
Toolchains compiled
Two full toolchains built, and a third that I still think can be made to work. The first uses older versions of everything and was mostly done as a test to get the build environment working against sources that are known to build. Even this known-good build process required some effort to get working in a current OS environment. The docs, notes and scripts will be coming in the near future.
So to outline what works and not, I give you the following.
Toolchain that comes from older versions of software and the docs from Tom Walsh:
- binutils-2.19.1a.tar.bz2
- gcc-4.3.2.tar.bz2 (with a patch from Tom)
- newlib-1.16.0.tar.gz (with a patch from Tom)
- insight-weekly-CVS-7.0.50-20091130.tar.bz2
Newest versions that compiled based on Tom's scripts:
- binutils-2.22.tar.bz2
- gcc-4.4.6.tar.bz2
- newlib-1.19.0.tar.gz
- insight-CVS-20111130.tar.bz2 (pulled from CVS head and required patching by me)
Newest versions, which fail to compile GCC's bundled zlib:
- binutils-2.22.tar.bz2
- gcc-4.6.2.tar.bz2
- newlib-1.19.0.tar.gz
- insight-CVS-20111130.tar.bz2 (pulled from CVS head and required patching by me)
The more exciting thing is that both GCC versions that compiled will compile code to an intermediate state. That is not proof that they generate a working executable, but it is a step in the right direction.
The issue in GCC is well documented (if you know what you are looking for) as a bug involving "--enable-multilib" during the build. The zlib library bundled with the GCC source fails to build in a cross-compiled configuration. Who knew that GCC packages its own copy of zlib in its sources? There appear to be a couple of possible fixes. The first is to use the native zlib from the host system by passing "--with-system-zlib", but that feels like a workaround instead of a fix. The other is to revert a change in GCC that is documented in a couple of places (Bug45174 and Bug43328). This is a bug in the "configure" phase of the standard "configure; make; make install" but shows up in the "make" stage. I'll revisit this as time permits and see about getting the latest GCC 4.6 series working.
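For reference, the workaround would look something like this; the source path, prefix and flag set here are my guesses to show where "--with-system-zlib" goes, not a verified recipe:

```shell
# Hypothetical GCC configure line using the host's zlib instead of the
# bundled copy (untested sketch; adjust paths and flags to your layout).
../gcc-4.6.2/configure --target=arm-elf \
    --prefix="$HOME/DevelToolbin/binaries/arm-4.6.2" \
    --enable-languages=c,c++ --with-newlib \
    --enable-multilib --with-system-zlib
```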
For the GCC 4.3.2 version, here is a quick compile test.
$ cat > test.c
int main (){return 0;}
Ctrl-D
$ ./arm-elf-gcc -Os -S test.c
$ cat test.s
.file "test.c"
.text
.align 2
.global main
.type main, %function
main:
@ args = 0, pretend = 0, frame = 0
@ frame_needed = 0, uses_anonymous_args = 0
@ link register save eliminated.
mov r0, #0
bx lr
.size main, .-main
.ident "GCC: (GNU) 4.3.2"
For the GCC 4.4.6 version, here is the same quick compile test.
$ cat > test.c
int main (){return 0;}
Ctrl-D
$ ./arm-elf-gcc -Os -S test.c
$ cat test.s
.file "test.c"
.text
.align 2
.global main
.type main, %function
main:
@ args = 0, pretend = 0, frame = 0
@ frame_needed = 0, uses_anonymous_args = 0
@ link register save eliminated.
mov r0, #0
bx lr
.size main, .-main
.ident "GCC: (GNU) 4.4.6"
While this is good news, it is not an ARM executable. My followup post will not be so upbeat.
Cross-compiler toolchain update
I've been working on building a toolchain using the notes from OpenHardware Building the ARM GNU 4.3.2, with some success. I finally got the base set of GCC 4.3.2 tools to build successfully. I have not yet used the resulting GCC to produce an ARM executable or verified that an executable works on the BlackArmor NAS. Those are tests for tomorrow evening, when I can get the NAS set up again on the network. It is currently in a box in the corner.
Several minor things needed to be updated and modified to get the scripts and environment to work. I've kept careful notes and will post them in the next couple of days, once I've verified that the compiler's output works. I'm also attempting to update the libraries and software to more current versions. GCC 4.3.2 and its associated libraries are several years old, and I'm trying to get GCC 4.6.x to build along with newer newlib, binutils and insight/gdb using the same basic set of notes and scripts. I bumped into a zlib issue in the second-phase GCC build that stumped me for the night. I'll hit it again tomorrow. Again, I'm keeping careful notes and build docs for the newer versions as well.
Tracking down the build dependencies from the operating system is sometimes a pain. I picked a very stripped-down install of Ubuntu: Ubuntu Server 11.10, because it is easy to install and update. Any Linux would do, but the package names may change. Ubuntu Server has no frills, so you add everything you need, which means all the libraries like GPM, etc.
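As a quick sanity check on a fresh install, something like this verifies the usual prerequisite tools are present (the tool list is my guess at what these builds need, not an exhaustive one):

```shell
# Check for common toolchain-build prerequisites on a fresh server install.
for tool in gcc g++ make bison flex makeinfo; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found"
    else
        echo "$tool: MISSING - install it with apt-get"
    fi
done
```

Each "MISSING" line points at a package to hunt down before starting the multi-hour build.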
So there is some progress and in the next couple of days I'll let you know if the build produces working ARM executables. I'm really excited about getting a working "HelloWorld" out there.
Labels:
ARM,
black armor,
linux,
NAS,
toolchain
Location:
Raleigh, NC, USA
Tuesday, November 15, 2011
Building the GNU ARM Toolchain: Part 2
Lesson learned on doing a toolchain build, or anything else for that matter: make sure you are reading the most current documentation available. I was working with very old versions of the software by using the GNU ARM website mentioned earlier. In digging into the problems with those builds, I found a few other sites that have detailed discussions on building the toolchain for specific versions of the ARM platform.
One of interest is the OpenHardware Building the ARM GNU 4.3.2 that has lots of useful hints. Also, the Installing Gnuarm ARM Toolchain on Ubuntu 9.04 had some useful notes as well. Both are very different approaches and for very different ARM platforms but they have lots of useful notes on how to create a toolchain. I'll probably use something in the middle between the two to get the job done.
So I lost some time with old versions but learned a good bit about the software in the process.
The tools for the toolchain are:
binutils: low-level tools for manipulating object code, such as the assembler and linker
gcc: the GNU compilers, which include C/C++ and other tools for compiling code down to object code
gdb & insight: debuggers for finding problems in code
newlib: a standard C library for embedded systems
Each tool builds on the previous one until you have a complete toolchain for creating programs for the platform. We are doing something even more interesting called cross-compilation. Since I don't have an ARM platform on which I want to build a C compiler and all the other tools, I am building them on my Intel laptop under Ubuntu 10 LTS. This means my x86_32 processor will run a compiler that outputs ARM executables: the compiler is built to run on x86 but produce ARM code. This is cross-platform compilation and is very typical for embedded systems work.
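In autoconf's terms, the arrangement looks like this (a sketch; the triplet names are illustrative, taken from common i686/arm-elf naming, not from my actual build logs):

```shell
# Cross-compilation in autoconf's build/host/target terms:
#   --build  = i686-pc-linux-gnu   # the machine doing the compiling (my laptop)
#   --host   = i686-pc-linux-gnu   # the machine the new gcc will run on (also my laptop)
#   --target = arm-elf             # the machine the new gcc emits code for (the NAS)
#
# build == host != target  ==>  a cross-compiler.
```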
The catch is that I don't want to just use an existing binary build of the toolchain but produce one myself. The toolchain provided by Seagate is old enough that even if I wanted to find those versions of the software, I'm unlikely to find them and get them working. So I'm looking to build a newer version of the compiler and see if the resulting software will run.
My first program will likely be just a simple HelloWorld app, but from small things come larger ones. Building OpenSSH, rsync and other tools will follow quickly if I get the toolchain up and running.
Monday, November 14, 2011
Building the GNU ARM Toolchain: Part 1
I found the GNU ARM Toolchain website awhile back, and they have several different versions of the toolchain out there. A toolchain is just the basic set of tools needed to build software: in this case the standard libraries, the compiler, the debugger and various other tools needed to write software. The version of the toolchain provided by Seagate is 3.0, which is very old. The oldest on the GNU ARM website is 3.3, with 4.1 being the newest.
I've pulled down 3.4, 4.0 and 4.1. The 4.1 does not compile on my system, which is Ubuntu 10 LTS, and the failure may have to do with x86 versus x86_64 differences. I'm not sure enough to diagnose that yet, so I dropped back a version to 4.0 and will check again. The error in the 4.1 build was in the assembly opcode section, and that isn't an area I want to try debugging at this point. Version 4.0 of the toolchain built cleanly for the initial libraries, then failed in the GCC compiler. Sometimes it pays to drop back a version to avoid bleeding-edge pain, and sometimes you just find new pain.
Status:
For 4.1: we have opcode errors in binutils, which is very early in the build process, and the errors look nasty.
For 4.0: we have an error in GCC, "fcntl2.h:51: error: call to '__open_missing_mode' declared with attribute error: open with O_CREAT", which looks like an error others have encountered, based on some quick Google searches. I've not spent enough time on it yet to consider this a loss.
Building the toolchain myself makes it easier to understand the one provided by Seagate, even though theirs is an older version. Once I get it working right, I'll write up some details on the process.
We'll see what happens with building the toolchains over the next day or so.
Tuesday, November 8, 2011
Sun Java SE for Embedded Systems (Jazelle DBX)
Earlier I mentioned a technology called "Jazelle DBX" for the ARM processor that allows Direct Bytecode eXecution (DBX) of Java bytecode in the ARM hardware, which should make Java run faster. That DBX technology is being phased out, with the newer Thumb-2 instruction set being ARM's preferred route for acceleration. However, the processor in the BlackArmor NAS was the first to have this Jazelle DBX feature, and I want to see if it has any merit. I did some digging around, like I mentioned I would, and found that Sun had produced a version of Java that may use this technology.
Sun has two small versions of Java. One is called Java ME (Micro Edition) and the other Java SE for Embedded. The Micro Edition was only for really tightly constrained environments like older cell phones that had just 8MB to 16MB of RAM. It was a feature-reduced subset of Java with lots of limitations to make it work in that environment. Remember my old Motorola Razr cellphone from earlier posts; that is where this version of Java lived. That version of Java was crippled and never really seemed to take off. On the other hand, that version of Java is in some of our Blu-ray players, so it wasn't all bad. We just don't want this version on the BlackArmor, as it doesn't give us anything interesting beyond cell phone Tetris.
The other version is called "Sun Java SE for Embedded" and lifts many of the limitations of the ME version. It is a mostly full implementation of Java and, from what I've read, allows most libraries to be used. The downside is that it requires licensing when used by a business; fortunately, development work is free. I pulled a copy of the software as a tarball from Sun and will take a look at it when I get an environment set up and some time to play. It has some requirements that may make it hard to use, as it takes a minimum of 32MB of RAM per virtual machine. Remember that the BlackArmor only has 128MB of RAM total, so that is quite a lot of memory for just one Java VM.
We'll have to see if this is even feasible but it may open up a huge number of possibilities when you look at the diversity of Java code running out there.
Well that was my fun reading for the evening. I hope you enjoyed my brain dump or at least found it tolerable. Anyone with experience in the area, please drop comment.
Sunday, November 6, 2011
Reading on ARM Architecture
So earlier I was digging around trying to find out more about the Black Armor NAS hardware and pulled some interesting information. Unfortunately, I don't have a lot of ARM background so a good bit of it was confusing as I reviewed it.
Snippet from earlier hardware information gathering:
$ uname -a
Linux NAS3 2.6.22.18 #1 Thu Aug 26 12:26:10 CST 2010 v0.0.8 armv5tejl unknown
$ cat /proc/cpuinfo
Processor : ARM926EJ-S rev 1 (v5l)
To rectify my lack of knowledge I started reading on Wikipedia and found the ARM architecture article, which made me realize that I've been missing out on an entirely different ecosystem of technological innovation. Reading through the features available in each processor was an interesting ride down memory lane, with the Intel CPU features I'm familiar with running parallel to ARM's decisions in the same areas. The two took completely different paths but seem to have exchanged ideas along the way. ARM has an interesting history as a company as well.
I found information on the processor on the Wikipedia page List of ARM Cores, and the earlier reading on the architecture helped me understand the differences between the family, the architecture and the core. Again, an interesting ecosystem of processor technology.
ARM Family: ARM9E
ARM Architecture: ARMv5TEJ
ARM Core: ARM926EJ-S
Features: Thumb, Jazelle DBX, Enhanced DSP instructions
Cache (I/D), MMU: variable, TCMs, MMU
Typical MIPS@MHz: 220 MIPS @ 200 MHz
This helps me understand what I will need in a toolchain and how to set up that environment. Earlier I was not even aware that I was missing most of this background. The ARMv# versus ARM##XXX naming was confusing me, but now I see the difference.
In ARMv5TEJ, the "T" means the Thumb instruction set, a compact 16-bit subset of the full ARM instructions that trades some features for better code density.
T: Thumb instruction set support
The "J" in ARMv5TEJ means we have "Jazelle" support. This feature stood out for me because it is direct execution of Java bytecode on the underlying hardware. It could have been useful if a small Java VM could take advantage of it, but it looks like a dead end since it is a closed implementation. It has also been made less relevant by Thumb-2, and whether there is real hardware support now depends on the specific implementation. It is interesting that the "Jazelle" feature was first implemented on this particular CPU. I'll have to do more reading to see if anyone actually got a JVM running with hardware support.
J: Jazelle support
The "E" means Enhanced DSP support, which may or may not help with streaming media. TDMI is implied by the E as well, which gives us support for:
E: Enhanced DSP (digital signal processing) support
T: Thumb instruction set support
D: JTAG debug support
M: Enhanced multiplier support
I: EmbeddedICE support
It is interesting that each of those features can be enabled during compilation and might be used to improve performance.
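Once a toolchain works, these letters map onto compiler options. A hypothetical sketch of that mapping (the option names are from gcc's ARM option documentation; I have not verified them against this toolchain):

```shell
# Hypothetical mapping from the feature letters to gcc options:
#   -mcpu=arm926ej-s   # tune and schedule code for this exact core
#   -mthumb            # emit the Thumb ("T") 16-bit instruction subset
#   (Jazelle "J" has no gcc switch; it would need JVM-level support)
```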