Wednesday, December 28, 2011
I found my old Samsung x426 USB cell phone programmer cable while cleaning out my home office. I may have mentioned this cable earlier when I wrote about adding a serial console to the BlackArmor NAS. It is a really old-style cable with an odd phone connector from well before mini-USB became the standard. The interesting thing about it is that it has a serial-to-USB converter chip (marked 2303HXC 0546) that does the magic of converting serial to USB, which is why it was ridiculously expensive ($35) when I bought it back in the day.
This is on my radar because the CrapNAS website has an entry on connecting a serial or USB cable so you can watch Linux boot in a serial console session. They describe two different ways to do this. The serial connection requires a MAX3232. The USB connection has a schematic built around a 2303HX. I'm not sure how that differs from my 2303HXC, so I will be doing some reading before I cut into the cable and get out the soldering iron.
Why am I even messing around with a serial console? Because if I am going to mess around with the lower-level system, I should have a back-out plan in case I do something wrong. A serial console on the device gives me more options during boot even if I cannot connect over the network.
As far as getting time to work on the compiler toolchain goes, I've been relaxing over the holiday break with family and cleaning my messy home office, and I have not even booted the virtual machine since my last post. I intend to get back to it someday.
Thursday, December 22, 2011
Seagate Black Armor NAS
Yeah, it's been hectic at work, with lots to do in domestic life as well, so this hobby project hasn't gotten much attention. Christmas break is coming, however, and I'm planning to get some time to work on it. I suspect most open source projects and hobbies get a boost over the holidays.
Merry Xmas & Happy New Year in case I don't get back here before the holidays.
Sunday, December 4, 2011
Newlib error during compile
Earlier I was working on the problem of compiling very basic programs with arm-elf-gcc. After some work, I found the underlying problem: the libc replacement from newlib is not being picked up at compile time with a properly working syscalls mechanism. Hardwired hacks to work around it produced an executable, but something is still wrong with how the compiler was built.
I've distilled the error down to a single search.
Google Search: "/lib/libc.a(lib_a-exit.o): In function `exit':" "newlib/libc/stdlib/exit.c:65: undefined reference to `_exit'"
Several other people appear to have run into the same problem, so I'll be reading up on it and seeing what solutions exist.
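From the reading so far, the usual fix people report is to supply stub implementations of the low-level syscalls that newlib expects the environment to provide. I have not verified this against my build yet, but a minimal sketch following the common newlib stub convention looks like this:
$ cat stubs.c
/* Minimal newlib syscall stubs -- just enough to satisfy the linker
   for a trivial program. A real port also needs _read, _write,
   _sbrk and friends. */
#include <sys/stat.h>
void _exit(int status)
{
    while (1)
        ;                       /* bare metal: nothing to return to */
}
int _close(int file) { return -1; }
int _fstat(int file, struct stat *st)
{
    st->st_mode = S_IFCHR;      /* claim everything is a char device */
    return 0;
}
int _isatty(int file) { return 1; }
int _lseek(int file, int offset, int whence) { return 0; }
int _getpid(void) { return 1; }
int _kill(int pid, int sig) { return -1; }
If the stubs really are all that is missing, then something like "$ arm-elf-gcc test.c stubs.c -o test" should link cleanly, much like the hardwired hack did.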
Saturday, December 3, 2011
Compilation failures
The problem is that I can get an ARM executable with a workaround, but I am not sure why the default setup fails to produce even a basic HelloWorld.
$ cat test.c
int main (){return 0;}
$ arm-elf-gcc test.c -o test
/home/mcgarrah/DevelToolbin/binaries/arm-4.4.6/bin/../lib/gcc/arm-elf/4.4.6/../../../../arm-elf/lib/libc.a(lib_a-exit.o): In function `exit':
exit.c:(.text+0x54): undefined reference to `_exit'
collect2: ld returned 1 exit status
$ arm-elf-gcc test.c -o test -v
This spews a couple of pages of additional output, which points me to something called "Using built-in specs" and lots of directory path information for things like include files and libraries. These all look about right. The directory /DevelToolbin/binaries/arm-4.4.6/arm-elf/lib has some interesting lib files in it.
$ arm-elf-gcc test.c -o test ~/DevelToolbin/binaries/arm-4.4.6/arm-elf/lib/redboot-syscalls.o
$ file test
test: ELF 32-bit LSB executable, ARM, version 1, statically linked, not stripped
When you are facing a missing library like the one above, you typically just have to find the right library and add it to your build. In this case, what is missing is so fundamental that we need to figure out why it is broken, or we will hit more issues down the road.
While finding the above syscall libraries, I noticed some files, called specs files, which are:
- linux.specs
- rdimon.specs
- rdpmon.specs
- redboot.specs
RDP, RDI, RedBoot and Linux are all syscall (system call) protocols. A syscall protocol describes how a libc (standard C library) communicates with the operating system kernel, or with whatever sits underneath on an embedded board. In our case the libc is newlib, which uses a library called libgloss as the interface between libc and the syscall protocols above. I'm not sure which protocol is the default, but something is not right about this combination.
$ arm-elf-gcc -dumpspecs
This dumps yet more output, even more cryptic, but it looks important when taken together with the information above. GCC has built-in defaults that these specs files override.
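To make that dump manageable, it can be filtered down to the sections that control linking. Something like this should work (the exact section names vary a little between GCC versions, so treat the pattern as a starting point):
$ arm-elf-gcc -dumpspecs | grep -A 3 -E '^\*(link|lib|startfile)'
The *lib: and *startfile: entries are the ones that decide which default libraries and startup objects get pulled into every link.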
There is an option to select a specs file from the GCC command line, which I used to identify what is happening.
$ arm-elf-gcc test.c -o test -specs=redboot.specs
$ arm-elf-gcc test.c -o test -specs=pid.specs
$ arm-elf-gcc test.c -o test -specs=linux.specs
The pid and redboot specs differ only in one minor setting, so they are essentially the same. Linux is a significantly different system call interface.
$ arm-elf-gcc test.c -o test -specs=rdimon.specs
$ arm-elf-gcc test.c -o test -specs=rdpmon.specs
Both the RDI and RDP specs return basically the same missing-library error as above. So we have probably identified the default libraries used by libgloss.
This is just a journey down the rabbit hole. So I need to revisit the newlib build process and see what I did wrong in it.
Update December 4th, 2011 7:30pm: What you see above happens in both the gcc 4.4.6 and gcc 4.3.2 compiled software sets: the exact same problem in both versions of gcc and their associated libraries. I did a full rebuild of 4.3.2, traced the paths built into gcc, and re-verified the paths to all libraries. The same issue still happens, so we have a genuine problem in the build process that is affecting the compiler. It could be anywhere across the binutils, newlib or gcc compilations, so I'll be digging into each. The nice part is that it looks like a common problem, so if I fix it in the 4.3.2 series it will probably fix the 4.4.6 series as well.
Thursday, December 1, 2011
Toolchains compiled
Two full toolchains built, and a third that I still think might be made to work. The first uses older versions of everything and was mostly done as a test, to get the build environment working against sources that are known to build. Even this known-good build process required some effort to get working in a current OS environment. The docs, notes and scripts will be coming in the near future.
So, to outline what works and what doesn't, I give you the following.
Toolchain built from older versions of the software, following the docs from Tom Walsh:
- binutils-2.19.1a.tar.bz2
- gcc-4.3.2.tar.bz2 (with a patch from Tom)
- newlib-1.16.0.tar.gz (with a patch from Tom)
- insight-weekly-CVS-7.0.50-20091130.tar.bz2
Newer versions that compiled, based on Tom's scripts:
- binutils-2.22.tar.bz2
- gcc-4.4.6.tar.bz2
- newlib-1.19.0.tar.gz
- insight-CVS-20111130.tar.bz2 (pulled from CVS head and required patching by me)
Newest versions, which fail to compile (GCC fails in its bundled zlib):
- binutils-2.22.tar.bz2
- gcc-4.6.2.tar.bz2
- newlib-1.19.0.tar.gz
- insight-CVS-20111130.tar.bz2 (pulled from CVS head and required patching by me)
The more exciting thing is that both GCC versions that did compile will compile code down to an intermediate state. That is not proof that they generate a working executable, but it is a step in the right direction.
The issue in GCC is well documented (if you know what you are looking for) as a bug involving "--enable-multilib" during the build. The zlib library that is packaged with the GCC source fails to build in a cross-compiled configuration. Who knew that GCC ships its own copy of zlib in the GCC sources? There appear to be a couple of possible fixes. The first is to just use the native zlib from the host system by passing in "--with-system-zlib", but that feels like a workaround instead of a fix. The other is to revert a change in GCC that is documented in a couple of places (Bug45174 and Bug43328). This is a bug in the "configure" phase of the standard "configure; make; make install" sequence, but it shows up in the "make" stage. So, I'll revisit this as time permits and see about getting the latest GCC 4.6 series working.
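For reference, the system-zlib workaround would go into the GCC configure step. Roughly like this, where the directory names and the rest of the flags are illustrative rather than my actual build script:
$ mkdir gcc-build && cd gcc-build
$ ../gcc-4.6.2/configure --target=arm-elf --enable-multilib --with-newlib \
      --with-system-zlib --prefix=/home/mcgarrah/DevelToolbin/binaries/arm-4.6.2
$ make && make install
This of course requires the zlib development headers to be installed on the host system.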
For the GCC 4.3.2 version, here is a test showing it compiling a trivial program.
$ cat > test.c
int main (){return 0;}
Ctrl-D
$ ./arm-elf-gcc -Os -S test.c
$ cat test.s
.file "test.c"
.text
.align 2
.global main
.type main, %function
main:
@ args = 0, pretend = 0, frame = 0
@ frame_needed = 0, uses_anonymous_args = 0
@ link register save eliminated.
mov r0, #0
bx lr
.size main, .-main
.ident "GCC: (GNU) 4.3.2"
For the GCC 4.4.6 version, here is the same test.
$ cat > test.c
int main (){return 0;}
Ctrl-D
$ ./arm-elf-gcc -Os -S test.c
$ cat test.s
.file "test.c"
.text
.align 2
.global main
.type main, %function
main:
@ args = 0, pretend = 0, frame = 0
@ frame_needed = 0, uses_anonymous_args = 0
@ link register save eliminated.
mov r0, #0
bx lr
.size main, .-main
.ident "GCC: (GNU) 4.4.6"
While this is good news, assembly output is not yet a working ARM executable. My followup post will not be so upbeat.
Cross-compiler toolchain update
I've been working on building a toolchain using the notes from OpenHardware's "Building the ARM GNU 4.3.2", with some success. I finally got the base set of GCC 4.3.2 tools to build successfully. I have not yet used the resulting GCC to produce an ARM executable or verified that an executable works on the Black Armor NAS. Those are tests for tomorrow evening, when I can get the NAS set up on the network again. It is currently in a box in the corner.
There were several minor things that needed to be updated and modified to get the scripts and environment to work. I've kept careful notes and will post them in the next couple of days, once I've verified that the compiler's output works. I'm also attempting to update the libraries and software to more current versions. The GCC 4.3.2 and associated libraries are several years old, so I'm trying to get GCC 4.6.x to build along with newer newlib, binutils and insight/gdb using the same basic set of notes and scripts. I bumped into a zlib issue in the second-phase GCC build that stumped me for the night. I'll hit it again tomorrow. Again, I'm keeping careful notes and build docs for the newer versions as well.
The operating system dependencies are sometimes a pain to track down. I picked a very stripped-down install of Ubuntu: Ubuntu Server 11.10, because it is easy to install and update. Any Linux would do, but the package names may change. Ubuntu Server has no frills, so you add everything you need yourself, which means all the libraries like GPM, etc.
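For anyone following along, the package set I'd expect a cross GCC build to need on a bare Ubuntu Server looks roughly like this. These names are from memory, so treat them as a starting point rather than a verified list:
$ sudo apt-get install build-essential flex bison texinfo \
      libgmp3-dev libmpfr-dev libmpc-dev libncurses5-dev
GMP and MPFR are required by GCC 4.3 and later, MPC by GCC 4.5 and later, and ncurses by gdb/insight.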
So there is some progress and in the next couple of days I'll let you know if the build produces working ARM executables. I'm really excited about getting a working "HelloWorld" out there.
Monday, November 28, 2011
Making Sea Salt
I did something completely different. I didn't boot my laptop the entire Thanksgiving weekend.
I went to the beach, ate steamed oysters, grabbed several gallons of water, spent time with my wife, filtered the water, watched television, and boiled the water to get sea salt. I now have a nice little container of sea salt flakes from my favorite beach and a salt slurry.
I read a couple of web pages on doing this, and most of what they say is common sense. Filter the water to clean out the sand and other impurities. Boiling is recommended to kill off any nasty organisms. Collect the water from a beach that is not polluted, and not just after a rain. The resulting salt is a mixture of flakes similar to kosher salt and crystals like the fancy salt mills use. I may add a bit of water to the salt slurry and put it in a shallow pan to crystallize so it looks nicer.
I'm considering making some of this into Christmas presents to family who like our little beach cottage.
Now that was something completely different.
Tuesday, November 15, 2011
Building the GNU ARM Toolchain: Part 2
Lesson learned on doing a toolchain build, or anything else for that matter: make sure you are reading the most current documentation available. I was working with very old versions of the software by following the GNU ARM website mentioned earlier. In digging into the problems with those builds, I found a few other sites that have detailed discussions on building the toolchain for specific versions of the ARM platform.
One of interest is OpenHardware's "Building the ARM GNU 4.3.2", which has lots of useful hints. "Installing Gnuarm ARM Toolchain on Ubuntu 9.04" had some useful notes as well. They are very different approaches for very different ARM platforms, but both have lots of useful notes on how to create a toolchain. I'll probably use something in the middle between the two to get the job done.
So I lost some time with old versions but learned a good bit about the software in the process.
The tools for the toolchain are:
binutils: low-level tools for manipulating object code, such as the assembler and linker
gcc: the GNU compiler collection, which provides C/C++ and other tools for compiling code to object code
gdb & insight: debuggers, for finding problems in code
newlib: a standard C library for embedded systems
Each tool builds on the last until you have a complete toolchain that allows for creating programs for the platform. We are doing something even more interesting, called cross-compilation. Since I don't have an ARM machine that I want to build a C compiler and all the other tools on, I am building those tools on my Intel laptop under Ubuntu 10 LTS. This means my Intel x86_32 processor will be running a compiler that outputs ARM executables. So the compiler is built to run on x86 but produce ARM code. This is cross-compilation and is very typical for embedded systems work.
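In GNU build-system terms this is the build/host/target distinction. As a sketch, a cross-toolchain configure line for binutils looks something like the following, where the prefix path is just an example:
$ ../binutils-2.19.1/configure --build=i686-pc-linux-gnu \
      --host=i686-pc-linux-gnu --target=arm-elf \
      --prefix=/home/mcgarrah/DevelToolbin/binaries/arm-4.3.2
$ make && make install
Build and host are both my x86 laptop; only the target is ARM, and that mismatch is exactly what makes it a cross toolchain.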
The catch is that I don't want to just use an existing binary build of the toolchain; I want to produce one myself. The toolchain provided by Seagate is old enough that even if I wanted to find those exact versions of the software, I'm unlikely to find them and get them working. So I'm looking to build a newer version of the compiler and see if the resulting software will run afterwards.
My first program will likely be just a simple HelloWorld app, but from small things come larger ones. Building OpenSSH, rsync and other tools will follow quickly if I get the toolchain up and running.
Monday, November 14, 2011
Building the GNU ARM Toolchain: Part 1
I found the GNU ARM Toolchain website awhile back; they have several different versions of the toolchain out there. A toolchain is just the basic set of tools needed to build software: in this case the standard libraries, the compiler, the debugger and the various other tools needed to write software. The version of the toolchain provided by Seagate is 3.0, which is very old. The oldest on the GNU ARM website is 3.3, with 4.1 being the newest.
I've pulled down the 3.4, 4.0, and 4.1. The 4.1 does not compile on my system, which is Ubuntu 10 LTS; it may have to do with x86 versus x86_64 differences. I'm not sure enough to diagnose it yet, so I dropped back a version to 4.0 and will check again. The error in the 4.1 build was in the assembly opcode section, and that isn't an area I want to try debugging at this point. Version 4.0 of the toolchain software built cleanly through the initial libraries, then failed in the GCC compiler. Sometimes it pays to drop back a version to avoid bleeding-edge pain, and sometimes you just find new pain.
Status:
For 4.1: we have opcode errors in binutils, which is very early in the build process, and the errors look nasty.
For 4.0, we have an error in GCC, "fcntl2.h:51: error: call to '__open_missing_mode' declared with attribute error: open with O_CREAT", which some quick Google searches suggest others have encountered. I have not spent enough time on it yet to consider it a loss.
Building the toolchain myself makes it easier to understand the toolchain provided by Seagate, even if that one is older. Once I get it working right, I'll write up some details on the process.
We'll see what happens with building the toolchains over the next day or so.
Tuesday, November 8, 2011
Sun Java SE for Embedded Systems (Jazelle DBX)
Earlier I mentioned a technology called "Jazelle DBX" for the ARM processor that allows for Direct Bytecode eXecution (DBX) of Java bytecode in the ARM hardware, which should make Java run faster. That DBX technology is being phased out, with the newer Thumb-2 instruction set being ARM's preferred route for acceleration. However, the processor in the BlackArmor NAS was the first processor to have this Jazelle DBX feature, and I want to see if it has any merit. I did some digging around, like I mentioned I would, and found that Sun has produced a version of Java that may use this technology.
Sun has two small versions of Java. One is called Java ME (Micro Edition) and the other is Java SE for Embedded. The Micro Edition was only for really tightly constrained environments, like older cell phones with just 8MB to 16MB of RAM. It was a feature-reduced subset of Java with lots of limitations to make it fit that environment. Remember my old Motorola RAZR cell phone from earlier posts? That is where this version of Java lived. It was crippled and never really seemed to take off, though on the other hand it is in some of our Blu-ray players, so it wasn't all bad. We just don't want this version of Java on the BlackArmor, as it doesn't give us anything interesting other than cell phone Tetris.
The other version is called "Sun Java SE for Embedded" and lifts many of the limitations of the ME version. From my reading, it is a mostly full implementation of Java that allows most libraries to be used. The downside is that it requires licensing when used by a business; fortunately, development work is free. I pulled a copy of the software as a tarball from Sun and will take a look at it when I get an environment set up and some time to play. It has some requirements that may make it hard to use: it takes a minimum of 32MB of RAM per virtual machine. Remember that the BlackArmor only has 128MB of RAM total, so that is quite a lot of memory for just one Java VM.
We'll have to see if this is even feasible but it may open up a huge number of possibilities when you look at the diversity of Java code running out there.
Well, that was my fun reading for the evening. I hope you enjoyed my brain dump, or at least found it tolerable. Anyone with experience in this area, please drop a comment.
Sunday, November 6, 2011
Reading on ARM Architecture
So earlier I was digging around trying to find out more about the Black Armor NAS hardware and pulled some interesting information. Unfortunately, I don't have a lot of ARM background so a good bit of it was confusing as I reviewed it.
Snippet from earlier hardware information gathering:
$ uname -a
Linux NAS3 2.6.22.18 #1 Thu Aug 26 12:26:10 CST 2010 v0.0.8 armv5tejl unknown
$ cat /proc/cpuinfo
Processor : ARM926EJ-S rev 1 (v5l)
To rectify my lack of knowledge I started reading the Wikipedia article on the ARM architecture, which made me realize that I've been missing out on an entirely different ecology of technological innovation. The features available in each processor made for an interesting ride down memory lane, with the Intel CPU features I'm familiar with running parallel to ARM's decisions in the same areas. The two took completely different paths but seem to have exchanged ideas along the way. ARM has an interesting history as a company as well.
So, I found information on the processor on the Wikipedia page for the List of ARM Cores, and the earlier reading on the architecture helped me understand the differences between the family, architecture and core. Again, an interesting ecology of processor technology.
ARM Family: ARM9E
ARM Architecture: ARMv5TEJ
ARM Core: ARM926EJ-S
Features: Thumb, Jazelle DBX, Enhanced DSP instructions
Cache (I/D), MMU: variable, TCMs, MMU
Typical MIPS@MHz: 220 MIPS @ 200 MHz
This helps me understand what I will need in a toolchain and how to set up that environment. Earlier, I was not even aware that I was missing most of this background information. The ARMv# (architecture) versus ARM##XXX (core) naming was confusing me, but now I see the difference.
In ARMv5TEJ, the "T" means the Thumb instruction set, a compact encoding of a subset of the ARM instructions that trades some features for smaller code.
T: Thumb Instruction Set support
So, the "J" in the ARMv5TEJ means we have "Jazelle" support. This feature initially stood out for me as it is direct execution of Java Bytecode against the underlying hardware. This could have be useful if a small Java VM could take advantage of the hardware but it looks like a dead-end since it is a closed implementation. It has also been made less relevant with the Thumb-2 implementation and it depends on the specific implementation if it is real hardware support or not now. It is interesting to see the "Jazelle" feature was first implemented on this particular CPU. I'll have to do more reading on it to see if anyone actually got a JVM running with hardware support.
J: Jazelle support
The "E" means Enhanced DSP support and that may be used for the streaming media or not. TDMI is implied by the E as well which gives us support for:
E: Enhanced DSP (digitial signal processing) support
T: Thumb Instructure Set support
D: JTAG debug support
M: Enhanced multiplier support
I: EmbeddedICE support
That is interesting, in that each of those features can be targeted during compilation and might be used to improve performance.
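As a sketch of what that looks like in practice, GCC has standard flags for this exact core and its Thumb support. I have not verified the output on the NAS yet, so consider these illustrative:
$ arm-elf-gcc -mcpu=arm926ej-s -mthumb -Os -S test.c
The -mcpu=arm926ej-s flag tunes code generation for this core, and -mthumb emits the compact Thumb encoding; leaving -mthumb off produces regular ARM instructions.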
Wednesday, November 2, 2011
Black Armor status
So my list of things to figure out keeps growing but I don't seem to get any time to work on them.
1. UPS software setup
2. DLNA server functionality
3. USB Hub issue to figure out so I can run the UPS and Printer together.
4. Serial Port hack (new)
I also want to get the serial port hack working, which requires some physical work disassembling the NAS and maybe some soldering to build a serial converter. The major work has already been done by another guy at http://crapnas.blogspot.com, but I'd have to follow along. There also appears to be a shortcut using an old Nokia USB cell phone cable that might be worth checking out.
Maybe next weekend I'll get some time. Project is just not getting cycles but I'm still thinking about it.
Tuesday, October 25, 2011
USB Ports on NAS
So I'm working on the UPS addition and forgot that I use that USB port for printer sharing. So it's time to do some reading on USB hubs and the Black Armor NAS 110. The printer is important and the UPS is important; I wonder if I can mix my peanut butter and chocolate.
Even stranger, there is a USB port on the front of the box, but it is special-purposed for USB memory backups only.
So I have several paths here to check.
Saturday, October 22, 2011
Black Armor: A ToDo List - UPS and DLNA
I have two new quick projects that I need for my Black Armor NAS 110 in the immediate future.
First I need the UPS functionality for this device, as I'm taking power hits at my residence and my old UPSes are just way too old and no longer keep the NAS running through a power blip. I bought an APC Back-UPS ES BE550G (550 VA, 330 W) at Costco on one of their specials. The Black Armor NAS documentation says it only works with APC UPSes, so I thought I was okay, but on further reading in the forums they say results vary even with APC devices. So it's time to see if my new APC UPS will work with the built-in software or if the software needs improvements. I hope to have a smart UPS running, but dumb UPS functionality without a shutdown mode will have to do if I cannot get it working. The smart UPS support depends on apcupsd 3.12.2, according to a post.
$ /usr/sbin/apcupsd --version
apcupsd 3.12.2 (18 January 2006) redhat
Next on the list is to figure out how functional the streaming media support on the device is. This will hopefully be more straightforward than the UPS. Family time will be vastly improved if I can get the DVD collection running off this device. DLNA is an interesting subject, and the Blu-ray player hooked to my TV may be able to play movies off the NAS. That would be optimal.
Wednesday, October 19, 2011
Black Armor NAS Information
Here is the beginning of a dump of information on the Black Armor device from the Linux kernel and environment. From this I learned the processor type and features. I also got some pointers to cross-compiler options used. These will all be important later.
$ uname -a
Linux NAS3 2.6.22.18 #1 Thu Aug 26 12:26:10 CST 2010 v0.0.8 armv5tejl unknown
$ cat /proc/cpuinfo
Processor : ARM926EJ-S rev 1 (v5l)
BogoMIPS : 794.62
Features : swp half thumb fastmult edsp
CPU implementer : 0x56
CPU architecture: 5TE
CPU variant : 0x2
CPU part : 0x131
CPU revision : 1
Cache type : write-back
Cache clean : cp15 c7 ops
Cache lockdown : format C
Cache format : Harvard
I size : 16384
I assoc : 4
I line length : 32
I sets : 128
D size : 16384
D assoc : 4
D line length : 32
D sets : 128
Hardware : Feroceon-KW
Revision : 0000
Serial : 0000000000000000
$ cat kmsg
<5>Linux version 2.6.22.18 (root@jasonDev.localdomain) (gcc version 4.2.1) #1 Thu Aug 26 12:26:10 CST 2010 v0.0.8
<4>CPU: ARM926EJ-S [56251311] revision 1 (ARMv5TE), cr=00053977
<4>Machine: Feroceon-KW
<4> Marvell Development Board (LSP Version KW_LSP_4.2.7_patch21_with_rx_desc_tuned)-- MONO Soc: 88F6192 A1 LE
$ dmesg
... way too much stuff ...
Tuesday, October 18, 2011
Rsync on Black Armor NAS 110
I figured out something simple but neat on the Black Armor NAS 110 (BA-NAS110): it has rsync, the powerful file-system replication tool from UNIX.
The caveats are that in order to do this you must have root on the device and an ssh connection to a command line. I'll write a friendly doc on how to get 'root' later. (If you want to do it now, just search for Hajo Noerenberg's work on the subject, sans the friendly write-up.)
So, the BA-NAS110 is capable of using rsync from the command line to replicate its data to another NAS or Linux system, if you have root on the system. Getting it set up was simple enough; the trick was knowing that the rsync daemon and client were already on the systems.
You have to create an rsyncd.conf file, since there isn't one pre-built. The syntax is the usual rsync 3.0.4 syntax.
Hosting system
$ id
(root)
$ cat /root/rsyncd.conf
pid file = /var/run/rsyncd.pid
[rsyncftp]
path = /shares/Public
comment = rsyncftp
$ rsync --daemon --config=/root/rsyncd.conf
Client system (could be another BA-NAS110 or Linux)
$ id
(root)
$ rsync --progress --stats -v -t -r rsync://admin@/rsyncftp/* /shares/Public
... watch the good times roll ...
Note: Add the "-n" option to rsync on the client side for the initial test connection to put it in dry-run mode with no data copied. Remove "-n" when you actually want to copy data.
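For example, a first dry run against a NAS at a made-up address of 192.168.1.20 (substitute your own) would be:
$ rsync -n --progress --stats -v -t -r rsync://admin@192.168.1.20/rsyncftp/* /shares/Public
Once the listed files look right, drop the "-n" and run it again to actually copy the data.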
The transfer speed between two BA-NAS110 devices across a dedicated switch is about 6-8MB/s. I've read some comments about performance on these devices being dogs, and that there are tweaks that might help.
I don't have my toolchain set up for compiling native apps yet, but getting all my data copied out of my old device to my new one was a pretty important step toward playing around with the older one. So I figured someone else might benefit from this bit of lore.
Sunday, September 25, 2011
Seagate Black Armor 110 NAS
I found something fun.
The Seagate NAS (Network Attached Storage) that I've been using at my house is running an embedded Linux. A NAS is a big network hard drive you can share between computers. I got a root account on it and have found a whole world of fun that could be done in there. Root is the master administrative account on UNIX systems that lets you do extra things beyond the normal.
The first step is getting a functional toolchain and then building some trivial tools. The goal is to have a full set of GNU tools available in a package format for people to use. I want to publish a fully working OpenSSH with scp support, and rsync, for this thing as a starting point. Maybe add some features for NFS. Just digging around on this thing reminded me how much I enjoy hacking on hardware.
A starting point is this gentleman who cracked open the hardware:
http://crapnas.blogspot.com/
The Seagate Support Forums are surprisingly useful:
http://forums.seagate.com/t5/BlackArmor-NAS-Network-Storage/bd-p/BlackArmorNAS
Hajo Noerenberg's work gives us root access and details on image format:
http://www.noerenberg.de/hajo/pub/seagate-blackarmor-nas.txt
http://www.noerenberg.de/hajo/pub/
Debian Lenny installed on 220 NAS:
http://forums.seagate.com/t5/BlackArmor-NAS-Network-Storage/Install-Debian-GNU-Linux-5-0-7-Lenny-on-the-Blackarmor-220-NAS/td-p/79422
I don't think I want a full Linux install, just to extend the existing environment with additional tools that are useful. A full platform and OS would be too much hassle. Besides, someone else already has that glory.
I'll post more if I get time to whack on this.