Sunday, July 10, 2011


Plastic chips come in a variety of epoxies. Some sample chips:

Chip 1 was in acid for a short amount of time so it's not a completely fair visual, but it looked about the same before going in. Anyway, in order of increasing glass content, these chips are 2, 3, 1. 3 is a PDIP-40 Microchip PIC, 2 is a TI chip, and I forget what 1 was, although I'm tempted to say it was made by ST. Chip 1 decapsulates very quickly, 3 is what I'd call normal or a little better, and 2 is difficult. You can see what 1 looks like after being in acid for a few minutes; here's what 2 looks like after sitting for a while (with 1):

It forms a hard sediment at the bottom of the beaker instead of really breaking up. There's so much epoxy that the acid soaking in actually becomes noticeable, instead of the chip breaking up like usually happens.

1's surface looks like this under a microscope (maybe 10X):

Notice all the junk in there. It makes dissolving the chip much easier: not only do we not have to dissolve that filler, but it falls out, pitting the surface and making it corrode faster.

Here's the side of 2:

Interesting that the top and bottom look a little different. I'm not sure if it's related to it being hard to dissolve, as I haven't had a chance to compare it to similar chips.

And here's 3 (maybe 10X):

You can see some particulate in there (maybe not as apparent in this lower res picture?) and it's lighter in color than 2.

To finish off, a few things not related to epoxies that came up during this. This thin package had the die close enough to the surface that it actually managed to poke through long before the package was dissolved:

Also when you do a large lot you get a pile of that junk leftover:

I used to use HNO3 to get rid of it, but it tends to dilute quickly and dilute HNO3 attacks aluminium. Fortunately, it's unlikely that you'll need to decap a valuable chip in volume, so for most chips you can safely just pick these out with tweezers. I was worried about scratching the chips, but the passivation is pretty strong.

Until next time.

Monday, July 4, 2011

Preparing for live analysis

I have a probing station on the way and am going to try to start learning about live analysis because it will allow me to confirm my understanding of chips. That is, I'll predict the state, apply voltage, and then check the levels on the chip. Later on it will of course be useful for numerous applications (probing lines, changing signals, etc).

My very first attempt, a while ago (maybe a year?), was using 70% HNO3 on a hot chip: rinse, repeat until exposed. Unfortunately, this method is very slow and takes about an hour of work for a PDIP. Even if you don't care so much about your time, so many handling cycles are dangerous for the chip (thermal stress, broken bond wires, dissolved Al, etc).

My next attempt, which I only partially went through with, was to automate the above a little. I Dremeled a hole in the bottom of some glassware and filled a beaker with a small amount of nitric. It was then RTV'd onto the surface of a chip and the whole assembly heated. Early results were promising, but it's somewhat dangerous if the RTV seal gives out before the chip is done. Additionally, you can't see the chip's surface, so you cannot gauge progress easily. I was also left with some residue on the chip. I'm not sure if it was from improper cleaning or if there was simply more residue than usual since everything was pooling into the acid cavity.

I didn't go back to this for a while mainly because without a probing station it wasn't a good use of time when I could just throw the chip in an acid bath with much less work. I've recently been moving onto more complicated scenarios and now consider live analysis to be a key objective for the reasons stated at the beginning. However, I value my time and don't want to spend a lot of time putting drops on a sample or dealing with RFNA for assorted reasons. Professional shops use decapsulation machines that squirt acid jets. Could I do something similar?

I started to think and decided the first thing I should try was to see if I could build an automatic decapsulation machine with equipment I had. I had a rusted-out peristaltic pump which seemed like an okay place to start. I gave up on the motor and rigged up a flexible shaft to it. See it in operation here.

This fed H2SO4 from the bottom of a beaker, which was to squirt a chip on a raised platform in the same beaker. I knew someone who said UL would approve anything, but probably not this. The machine was flat out hazardous for the following reasons:
-Exposed spinning shaft
-Pressurized heated acid on surplus / scavenged parts
-If the acid flow was uneven it could stress the glass and break it

It also suffered from some practical problems:
-I could not get the acid hot enough. I guess a combination of the glass insulating the chip too much and the acid cooling too much (although it was still pretty hot) before it reached the chip
-My platform was just an inverted beaker (inside a larger beaker) and the chip could easily get knocked off

I'm not a man easily defeated though. I spent some time thinking about what I could do better. I got some PTFE beakers which I drilled out to make PTFE baskets so that I could take chips out of acid baths more easily. Probably a short post on that at some point. It got me thinking: although I couldn't machine / shape glass very easily, I could easily machine PTFE. My goal was to make an assembly that would shield the chip from the acid except for a milled-out impression where I wanted it to etch. I ordered some stock, ordered some PTFE bolts, and already had a PTFE sheet.

Before chips can actually be used, it's a good idea to mill out a cavity (I used a 3/16" (0.1875") TiN coated endmill for 0.3" pitch) so that the die will be reached much faster. Make it less wide than your chip so that you don't collapse the lead frame from excessive etching on the sides. Professional shops x-ray the chips to find the right depth, but you have a few options:
  • Rule of thumb: mill halfway to the top of the leads
  • Sacrifice one to find where the bond wires are. Might be worth it if you have a pile of them, although it still will probably only be an approximation
  • Don't mill at all if it's very thin. This certainly shouldn't be your first sample though, as you'll have more issues with the lead frame collapsing
  • Some professional units short all of the pins and wait for continuity to the bond wires. I tried putting some water in the cavity to help detect when they were getting close, but could not do it reliably
In any case, inspect after milling. Try adding water to make bond wires more apparent.
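The halfway rule of thumb amounts to a one-line calculation. Here's a sketch with hypothetical PDIP dimensions (the function name and numbers are mine, not measurements from any real part):

```python
def mill_depth(lead_depth_mm):
    """Rule of thumb: mill half the distance from the package top
    down to the top of the lead frame, leaving margin above the
    bond wires since their exact height is unknown without x-ray."""
    if lead_depth_mm <= 0:
        raise ValueError("lead depth must be positive")
    return lead_depth_mm / 2.0

# Hypothetical package: lead frame tops sit 2.0 mm below the surface
print(mill_depth(2.0))  # mill 1.0 mm deep
```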

When I was experimenting with the continuity method I got sloppy and didn't pay attention to depth. The endmill hit the Si. While not dangerous to the user, it did ruin the tip by dulling it and taking off the TiN coating (as a reference, silicon is 7.0 Mohs hardness but even titanium is only 6.5). Afterwards, packages would heat up instead of being cool to the touch. I'm also told that excessive heat can burn the epoxy and make it difficult to dissolve, although I haven't seen this yet. Healthy endmill that went too far:

Dull endmill that went too far:

You can see the wires are more smeared around than cut. For example, in the first image you can make out the bond wire arc cross section, but they are more scattered around in the second. Both will still feel pretty sharp to the touch, so I'd recommend you start with a new endmill or be very careful. Another problem with using a dull endmill is that it requires more pressure to start a cut. You need to take off very thin slices, and a dull endmill will tend to "bite" and then take a larger slice out. I had good luck using a sharp endmill in a breadboard. The breadboard / pin contact provided a small spring factor which allowed very slow, precise cuts. Additionally, I was able to place it back into position after inspecting depth.

I assembled my piece and lowered it into a beaker after milling a small cavity into the top of a PDIP. To avoid letting the epoxy slag accumulate in the cavity, I put it into the beaker upside-down. This aspect seemed to work rather well. Unfortunately, a few things didn't. First, I underestimated PTFE's thermal expansion coefficient. The block expanded a fair amount and wedged itself up against the glass; I feel lucky that it didn't break. Second, and related to the first point, PTFE becomes soft at higher temperatures. This had two effects: the gasket drooped down, breaking the seal, and the bolts sheared from the added stress combined with being softer. Another thing I did was to tack the chip in place with RTV. Initially I used just a small amount, but I tried to fill it in after seeing the gasket wasn't going to hold the chip in place. I then went so far as to flip the assembly over, but this caused the predicted slag problem. Although there was a clear separation between the epoxy that was still part of the package and that which wasn't, it couldn't be removed without destroying the fragile bond wires. Finally, the larger volume of RTV was significant since it got attacked more readily than the epoxy.

Although it didn't work out, it was a step in the right direction. The PTFE bolts were kind of expensive though, so I was a little bummed about that. What if I just made a solid piece? I could prevent circulation at the top by not filling the bath above the top of the assembly. Acid would then stay stagnant in the middle and most likely cooler. And here it is:

After adding some handles to make it easier to take out of the acid:

The idea worked quite well. I was able to see, though, that the stirbar:

wasn't knocking the bubbles out like I thought it would. I solved this in two ways. First, I filled the cavity more carefully, letting the acid flow in from the side rather than trapping air bubbles as happens when the acid is just poured in. Second, I drilled two holes in the side to let air escape. While this does increase package corrosion, it didn't seem to significantly, since the acid was mostly hitting the milled-out portion.

The first chip gave very good results:

Die before any cleaning:

After pressurized water (although acetone would be preferred if I were serious about keeping it alive: water is hazardous to circuits, and since 98% H2SO4 is hygroscopic, water should not be allowed to contaminate the chip):

Ultrasound would probably clean it up nicely. Beware though: if the connections are weak you may knock them off. Try shielding the chip from the ultrasound / using lower power, or soaking first to see if that will take the residue off before running at full force. On a related note, be absolutely sure that you've cleaned the chip thoroughly to eliminate acid. Use acetone, 98% rubbing alcohol, or other things that won't add water. Put it on gentle heat afterwards to drive out any leftover moisture, and store in a sealed container away from moisture and dust. Desiccant is probably a good idea. I had a chip I just flat out didn't wash after doing this; I need to take a picture of it.

I also need to test how much this special jig helps vs. just milling and dumping in the bath. Since the acid doesn't corrode the pins very fast, it may very well be sufficient to just dump the chip into the bath as is.

I'd like to eliminate the RTV completely by making some PTFE spacers. They'll go on the side of the chip so that it gets held in place better and allows less acid to flow to the top.

An associate has started to experiment with making microprobes, which turns out to not be that hard. I think he told me it's something along the lines of: put a tungsten rod in 20% w/v KOH and run some current through it with a certain polarity. I was told the other parameters but just can't remember them off the top of my head. Hopefully he'll do a writeup on it at some point, or I will once I have a need to start making probes. I recently received a probing station, so this might be in the near future.

To summarize, here are the key points:
  • Your first chip should be easy to handle. Try a PDIP as they are heavy duty and have lots of room for error. Whatever you choose, you'll probably want to mill it and will want a way to securely hold it
  • Start by milling out a cavity. Use something narrower than your package to avoid collapsing the lead frame. Milling halfway from the package top to the top of the lead frame is a good rule of thumb
  • Mount the chip upside down in an acid-proof jig. I used PTFE; glass would also work fine.
  • I used a stir bar to increase circulation although I have yet to prove it actually helps
  • Make sure you can take the chip out of the bath to check on it. You can't un-etch, and over-etching endangers the integrity of the chip. Be aware though that thermal shock can kill the chip
  • Avoid using water when rinsing. Use hygroscopic and / or water free solvents. Use ultrasound carefully as it may knock off bond wires.
  • Gently heat dry it when done to be sure you've driven out moisture
  • Store in a dry location. I use centrifuge tubes. Desiccant is probably a good idea, although likely not strictly needed

Sunday, July 3, 2011

A simple experiment with JSSim (visual6502)

The folks at visual6502 have really done a great job on their project and I've been meaning to get more familiar with their work for a while. Now that I've graduated I have more time for these projects and was able to dig in over the past week, especially yesterday. Bottom line: my experiment can be found at and a demo at This writeup is based on my git commit 6a613ee1131bbdec9a8bf4b6eeb02d13147842ab, which was forked from mainline's de265ecdb89d8c5d299f09ad69aaf8b87b1aed5d. Changes are as noted, but most code snippets are copyright Brian Silverman, Barry Silverman, Ed Spittles, Achim Breidenbach, Ijor, and maybe some others that I missed. See the github source for details.

I don't have much experience with JavaScript, but I have enough experience with C-like languages that it isn't really hard to use; I just try to follow the syntax of things around me. I started by moving the 6502 into its own folder, as later chips have tended to be, so that I could focus on the relevant files more easily. For those not familiar with visual6502, here's a screenshot of the 6502 running in JSSim (JavaScript Simulator):

Although it's not obvious from the still picture, metal, poly, and diffusion are being colored according to their voltage potential. Wow! An outstanding way to learn about chips. However, the complexity of the simulator scared me away from really trying to understand how it worked. Fortunately, most of the work is put into the data and the simulator core is easy to follow. In this post I'm going to step you through how visual6502 works and how to create a clocked inverter circuit using simple tools.

The first thing that you'll need is a reference diagram. I somewhat arbitrarily decided to try an NMOS inverter since I knew the 6502 was NMOS logic and could look at an example if I got stuck. An inverter just seemed like something I could easily clock with a single input. Let's start with a brief review of NMOS logic, since these days it's all about CMOS. In NMOS logic, we use a single transistor polarity and short out voltage through transistors to invert outputs. Here is an NMOS inverter from Wikipedia:
When A is 0 the switch is open and current can flow from VDD through R to OUT (A: 0, OUT: 1). If we put voltage on A (the gate), the switch closes and shorts out OUT (A: 1, OUT: 0) through the drain at top and source at bottom. NMOS fell out of favor because CMOS doesn't need to short out a resistor (which costs power whenever the input is 1), eventually became faster (better noise margins), and also took up less chip space.
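The switch-and-resistor behavior just described can be captured in a tiny truth-function model. This is only an illustrative abstraction in Python (my own naming), not how any real simulator models it:

```python
def nmos_inverter(a):
    """Model the NMOS inverter: a pull-up resistor ties OUT to VDD,
    and the transistor shorts OUT to ground when the gate (A) is high."""
    transistor_on = (a == 1)
    if transistor_on:
        return 0  # OUT shorted to ground through the transistor
    return 1      # no path to ground, so the pull-up holds OUT at VDD

for a in (0, 1):
    print(f"A={a} OUT={nmos_inverter(a)}")
```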

Converting this into a (simplified) layout:
I used gray for metal, black for contacts, red for poly, green for N diffusion, and white for P substrate. The blue lettering is an arbitrarily assigned net (electrically connected group) number which we'll use later as we convert this into a simulatable data file. I might use the terms node and net interchangeably; they mean the same thing here. As a reminder, the distinction between source and drain is less important at a fundamental level. For our purposes we only care that the ends of the green blocks are the switch contacts and that the switch is controlled by the red part (polysilicon, aka poly). Finally, assuming self-aligned gates, the poly protects the silicon under the gate, so we only have diffusion around the poly and not under it. Early MOS processes used metal gates but later switched to poly (not regular Si, because you can't grow good crystals on an amorphous SiO2 glass surface).

Notice that we really try to avoid conventional resistors. While they can be made from strips of poly or diffusion, the easiest way is to make them out of transistors. I am not deeply familiar with this and initially had the drain and gate connected instead of the gate and source as above. So if you see images with them reversed, it's because I was too lazy to re-take screenshots after I fixed it. It's on my TODO list so that I can better recognize and understand them. The transistor below is more interesting and we'll mostly focus on it.

Pretty picture, but it's also pretty lifeless. Time to start digging into the codebase. If you grab a copy of the visual6502 source code (either from my repo listed above or from the main repository at you should see a chip-6800 subdirectory which defines the files you'll need to create for your own simulation:
  • nodenames.js: defines human friendly node names such as clk0
  • segdefs.js: defines how to draw the non-transistor parts and their connections
  • transdefs.js: transistor-to-net connections and transistor drawing
  • support.js: utilities and overrides to stub out unneeded functions
  • testprogram.js: CPU instructions. Since we won't have a CPU we don't need this file

nodenames.js contains the nodenames variable and looks something like:
var nodenames = {
  gnd: 2,
  vcc: 1,
  clk0: 3,
}
vcc is net 1, gnd is net 2, and clk0 has been aliased to net 3.
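In other words, nodenames is just a name-to-net mapping. When reading simulator output, the reverse lookup is handy too. A minimal Python sketch using the same net numbers as the example above (the `netnames` helper is my own, not part of visual6502):

```python
# Name -> net number, mirroring the nodenames.js example above
nodenames = {"gnd": 2, "vcc": 1, "clk0": 3}

# Reverse mapping: net number -> human-friendly name
netnames = {net: name for name, net in nodenames.items()}

print(netnames[3])  # clk0
```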

segdefs.js contains the segdefs variable and looks something like:
var segdefs = [
[ 4,'-',5, 177,94, 193,95, 193,179, 178,180],
[ 1,'+',4, 128,214, 177,214, 177,265, 129,264],
[ 2,'-',3, 128,95, 179,94, 177,146, 128,146],
[ 4,'-',0, 66,163, 192,161, 193,179, 64,179],
This probably looks pretty cryptic at first glance. The first element is the node number. The second is the pullup status: '+' for pullup and '-' for regular (although I think any non-'+' value will work). That is, a '+' indicates a resistor is connected to the positive supply and will turn on attached gates if not shorted out. Each coordinate pair after the layer number forms part of the polygon used to draw the chip. All of the above are rough rectangles.

The third number is the layer number. This does not affect the simulation to my knowledge, but we do want the visual aspect to work correctly. If you look in expertWires.js you should see:
var layernames = ['metal', 'switched diffusion', 'inputdiode', 'grounded diffusion', 'powered diffusion', 'polysilicon'];
var colors = ['rgba(128,128,192,0.4)','#FFFF00','#FF00FF','#4DFF4D',
var drawlayers = [true, true, true, true, true, true];
This defines the layer numbers (0-indexed). Thus the sample data above uses the layers poly, powered diffusion, grounded diffusion, and metal. Switched diffusion is diffusion that will change state during simulation because it's on a switched side of a transistor. In the sample image the two diffusion segments on the right are switched, since they may or may not have a voltage potential on them depending on whether the transistor is on. The upper left diffusion is powered since it always has positive voltage, and the lower left is grounded diffusion since it's always at ground potential. Hopefully poly and metal are self-explanatory.

We render in the order given, so make sure to place them in a good order. Make metal last, as it's semi-transparent and anything else will just cover it up. None of the other polygons (except transistors, but they aren't usually rendered) should overlap, but if they do just arrange things as needed.
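To make the segdefs field order concrete, here is a small decoder for one entry (node number, pullup flag, layer index, then x,y polygon pairs). The entry is copied from the sample data above; the function and dictionary names are my own:

```python
# Layer names as defined in expertWires.js (0-indexed)
LAYERNAMES = ['metal', 'switched diffusion', 'inputdiode',
              'grounded diffusion', 'powered diffusion', 'polysilicon']

def decode_segdef(entry):
    node, pullup, layer = entry[0], entry[1], entry[2]
    coords = entry[3:]
    # Each (x, y) pair is one vertex of the drawn polygon
    polygon = list(zip(coords[0::2], coords[1::2]))
    return {
        "node": node,
        "pullup": pullup == '+',  # '+' means tied to VDD through a resistor
        "layer": LAYERNAMES[layer],
        "polygon": polygon,
    }

seg = decode_segdef([4, '-', 5, 177, 94, 193, 95, 193, 179, 178, 180])
print(seg["layer"], len(seg["polygon"]))  # polysilicon 4
```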

The final key file is transdefs.js which contains the transdefs variable:
var transdefs = [
The first element is the transistor name, which is followed by the gate, first, and second net connections respectively. Like in the layout, we don't distinguish between the source and drain.

Now that we know what data we need, we need to generate it. While I could learn to use or develop my own tools for converting layers to *.js files, I decided to go with the KISS strategy. I used a Kolourpaint toolchain to generate my *.js files:

I generated the points by hovering the mouse over the various coordinates and typing them into the *.js files. With both windows open at once it went pretty quick. If you're wondering why it's upside-down, it's because the simulator has the origin in the lower left hand corner and Kolourpaint has it in the upper left hand corner. By flipping the image upside-down the coordinates come out correctly.
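The coordinate flip is simple enough to sketch: for an image of height H, a point's y coordinate in the upper-left-origin system maps to H - y in the lower-left-origin system, with x unchanged. A minimal helper (my own naming, assuming plain coordinate tuples):

```python
def flip_y(points, height):
    """Convert between upper-left-origin (image editor) and
    lower-left-origin (simulator) coordinates. The transform is its
    own inverse, so the same function converts in either direction."""
    return [(x, height - y) for (x, y) in points]

pts = [(177, 94), (193, 95)]
print(flip_y(flip_y(pts, 400), 400))  # round-trips back to the original
```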

But it's not over yet. I've glossed over support.js, but it's actually necessary for this to work. The stock functions are specialized for a full-blown MCU, a 6502 in particular, and we will have to override these functions as appropriate. Finally, we need to set the canvas size by setting grChipSize, which sets width and height. My images were 400 x 400, so I set grChipSize to 400. Let's step through initialization so that we know what we need to fix up.

We start in the main .html file by including a bunch of stuff. In particular, you'll need to change the paths to reflect your files instead of the template's. For example, I used chip-6800 as a template, so I had to substitute things like:
<script src="chip-6800/segdefs.js"></script>
with:
<script src="chip-tutorial/inverter/segdefs.js"></script>
or wherever you put your files. Trusting the general structure and skipping over the HTML layout, the key item is:
function handleOnload() {
which launches setup() in expertWires.js after 200 milliseconds. The other key item in the main file is the play button:
<a href="javascript:runChip()" id="start"><img class="navplay" src="images/play.png" title="run"></a>
which calls runChip(), but we won't worry about this for now.

This function is mostly just a bootstrap for the next stage. They do a lot of this and I'm not sure why they don't just make function calls.
EDIT: I've been told this is related to not letting scripts run too long and making the browser complain. By re-submitting the request, the browser doesn't get so angry. They aren't sure if this is standard for web apps, but it seems to work.
Anyway, here it is:
function setup(){
statbox = document.getElementById('status');
setStatus('loading ...');
setTimeout(setup_part2, 0);
And this gives:
function setup_part2(){
frame = document.getElementById('frame');
statbox = document.getElementById('status');
setStatus('loading graphics...');
setTimeout(setup_part3, 0);
setupNodes() works on segdefs to set up the visual portion. For historical reasons (per a comment I read somewhere) it also handles the pullup status, as noted earlier.

setupTransistors() does the actual transistor and net setup. One point of interest is that C1 and C2 may be swapped so that GND and VCC end up in C2 even if they weren't there in transdefs.js. We also build a list of all of the transistors connected to each net. That way, when we simulate an event, we only have to reference the net instead of iterating through all of the other transistors looking for relevant gates, exchanging memory for CPU usage.
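The memory-for-CPU trade just described amounts to a precomputed adjacency map: for each net, record the transistors whose gate or channel touches it, so an event on a net only visits its own transistors. A toy Python version with made-up transdef-style tuples (name, gate, c1, c2); the names are hypothetical, not from the 6502 data:

```python
from collections import defaultdict

transdefs = [
    ("t1", 3, 1, 4),  # gate on net 3, channel between nets 1 and 4
    ("t2", 4, 2, 5),  # gate on net 4, channel between nets 2 and 5
]

nets_to_transistors = defaultdict(list)
for name, gate, c1, c2 in transdefs:
    for net in (gate, c1, c2):
        nets_to_transistors[net].append(name)

# An event on net 4 only needs to look at t1 and t2, not every transistor
print(nets_to_transistors[4])  # ['t1', 't2']
```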

setupParams() parses query parameters (page.html?key=value) and so isn't important for basic usage. setupExpertMode() sets up the probe control panel and you don't really need to worry about it. Finally, detectOldBrowser() is compatibility related (makes rendering faster on certain systems?) and you also don't need to worry about it.

We now move onto setup_part3():
function setup_part3(){
setStatus('resetting ' + chipname + '...');
setTimeout(setup_part4, 0);
The chip layout should be visible and so we start to render the layout and move onto part 4:
function setup_part4(){
} else {
document.getElementById('stop').style.visibility = 'hidden';
Glaze over things and go to initChip(), which is important since you'll need to define it. initChip() is responsible for setting the startup logic state. Unfortunately, the default implementation in macros.js has statements like setHigh('rdy') which are 6502 specific. I cut that stuff out to give a very basic chip initialization instead. See my support.js; basically it sets all transistors to off and then recalculates all nodes (recalcNodeList(allNodes())).

recalcNodeList() is a core interface. It's a discrete event simulator where we propagate switch information when things change. Since there's no guarantee it will settle, it will abort after 100 iterations in case we did something dumb like create a ring oscillator by accident.
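The iterate-until-settled idea with a safety cap can be sketched like this. This is a generic event-loop skeleton under my own naming, not visual6502's actual recalcNodeList():

```python
def settle(dirty, recalc, max_iters=100):
    """Propagate changes until nothing changes, or give up after
    max_iters passes (e.g. an accidental ring oscillator never settles).
    recalc(net) returns the set of nets whose state changed as a result."""
    for _ in range(max_iters):
        if not dirty:
            return True          # simulation settled
        next_dirty = set()
        for net in dirty:
            next_dirty |= recalc(net)
        dirty = next_dirty
    return False                 # did not settle within the iteration cap

# A chain that settles: each net perturbs the next, up to net 3
print(settle({0}, lambda n: {n + 1} if n < 3 else set()))  # True
# A two-net oscillator never settles
print(settle({0}, lambda n: {1 - n}))                      # False
```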

setup_part4() finishes by calling go(), which starts looping the simulation. Usually this is triggered by hitting the play button in the main page, but I hard coded the running variable to true so that I didn't have to hit the button. Also worth noting: I added a step delay variable (go_timeout). It may make sense for larger chips to run at full throttle, but for this simple simulation I limited it to 1 Hz. step() will look for the net clk0 and invert its state. It also does a few other things, so I added the following stubs:
//Simple logic chip stubs

//Print registers and such, we have none
//Could use the input and output pins I guess if we really wanted to
//Used extensively in macros.js
function chipStatus(){}

//Simple logic chips have no bus to read/write, skip over
//Executed as part of clocking (halfStep()) in macros.js
//Alternatively we could have just re-implemented these functions
function handleBusRead() {}
function handleBusWrite() {}

//Stub implementation in case not using memtable.js
//No memory to track
function setupTable() {}
Whew! We should be ready to run. Check my data files for how I defined the *.js files. Alright, let's see what we get:

?!? Upon a little investigation, we see that there is a 400 pixel gutter. Since our image is 400 pixels, if we set grChipSize to 1200 we will see it centered at the bottom:

But really we want it to look nicer, so let's take off the 400 pixel left gutter:

Alternatively, we could have made the transistors big enough that the gutter doesn't matter. I added the variable main_area_left_gutter and set it to 0. I'm not clear why they added a gutter to the left but not the bottom. In any case, let's see some clocking action! (The above image was taken before I added the clock.) Clock on:

Clock off:

And it works! As you can see, the powered and grounded diffusion stay the same while the switched diffusion area changes along with the metal. Not too much work overall, even if you don't know much about web technology.

Thanks to the visual6502 folks for providing such great software and making my inverter more correct instead of "just working"! My next steps will be to start cross referencing the *.js files against the die images and also to generate *.js files automatically from layer images. On a final note, I've also learned a relatively simple technique for preparing ICs for live analysis that I'll hopefully make a post about in the near future.

Wednesday, March 2, 2011

Studying the CD4001

Somewhat arbitrarily, I decided a CD4001 would be a good chip to really study to get a better feel for how a chip is put together. While I can recognize bits and pieces of larger chips, I still lack the fundamental understanding needed to recognize raw transistor arrangements. Although such basic logic chips have heavy optimizations, which can be somewhat undesirable in a study tool, I'm hoping their simplicity makes up for it.

The original chip I was going to look at was a Fairchild CD4011:

I decapsulated it and found it had nice coloring:

Hopefully "POS" doesn't refer to their confidence in their design. This was only intended to be a preliminary quick photo before cleaning, but my metal tweezers slipped and sent it flying to who knows where. I now have plastic tweezers, which tend to chip the dies less and are less susceptible to slipping. Anyway, take a look at what I think is a National Semiconductor 4001 (it was in a tank of 4001's):

In a similar area:

Maybe it's just the "natural" arrangement for this sort of configuration? I'll figure out more as I etch out the transistors. It's interesting though that one's a CD4001 and the other is a CD4011.

Another item of interest is that older Texas Instruments datasheets had top metal included. Compare a datasheet with one of my snaps:

...and the (rough) stitch:

One interesting thing with the TI parts is that you can identify pin 1 by a bullet shaped pad. Other vendors have similar markings, and the shapes seem to be unique per vendor. For example, it seems Motorola may use an octagon-like pattern (all taken from what appear to be different revisions of the same 4001):

The first two are nearly identical. The last one has a full octagon whereas the first two had a square corner.

I have some etching chemicals on the way that will hopefully arrive by this weekend, which I can use to expose some transistors. I have a roll of 100 Philips 4011's (about $6 from Jameco):

which I'll practice on, and then expose other chips that I have in more limited quantities after I have some results. Since these chips are so simple, I can actually make guesses as to what a lot of things do, but I would like the transistors as well to complete the picture. If successful, I'd like to write up a tutorial that takes someone through decoding the chip.

On a random note, I get a lot of my chips by scrapping old electronics. I heat gun the board (wearing my 3M industrial respirator so as to not breathe too many fumes) and collect chips into a tray. Usually there are only a few I really care about, such as the main CPU or some FPGAs. There are lots of leftover small chips. It's not cost effective for me to use them in anything I design, for a number of reasons. So, what to do with them? How about throw them in a beaker and decap en masse:

The larger chip is an i960 that I savagely ripped out of a computer that was being junked. As such, it got cracked in two spots. Setting up for mass photography:

They are on a microscope slide with sticky tape. I estimate I spent only about 1 min on each chip. Granted, this has limited usefulness, but it does show a number of interesting designs and I was never going to use the chips otherwise. For the curious, I uploaded a bunch of them to

Thursday, February 17, 2011

Scaling up image stitching

In summary: find the Python program I wrote here (it works, but is a work in progress); you'll need the entire repo though:

Now that image capture is getting reasonably automated, stitching is the next bottleneck for mass scale IC -> netlist conversion. The Visual6502 team is working on scaling up their image -> netlist conversion. I recently got in contact with them and am hoping to get more involved. In the meantime, I suppose I'm a turbo nerd and enjoy just looking over the layouts.

Knowing that Visual6502 had the best images, I managed to convince Christian Sattler to give me his stitch code and get it under an open source license; I somewhat arbitrarily called it csstitch. You can now find it here along with some of my patches. Unfortunately, I quickly realized that the high quality imagery from the confocal microscope had simplified a lot of the stitching. For example, no photometric optimization was being done, and it was based off of autopano-sift-c (SIFT based), which I've always gotten far inferior results from compared to autopanoaj (also SIFT based; I call it autopanoaj since those are the author's initials and autopano is too vague). From what I can tell, autopanoaj's secret may be that it has a very good outlier detection algorithm. If you turn it off, it produces many very poor control points (features). I've also been playing around with panomatic (SURF based). My general feel has been that its quality is less than autopano-sift-c's, but I haven't had enough time yet to give it a fair trial.

With this experience and some ideas from csstitch, I dabbled at making my own higher performance stitching app. With the CNC producing very accurate XY coordinates, it seemed I could heavily optimize the control point finding process. Unfortunately, there turned out to be a bunch of gotchas along the way. Some of them are due to oddities of the .pto format, and some due to the fact that I run autopanoaj under WINE (yuck...) since I don't want to run Windows and the Linux version is out of date.

The first step is to arrange the images into a rectangle. Since the Python Imaging Library (PIL) and .pto both put the origin at the upper left, this seemed the natural coordinate system. At first I tried lower left since that's what I was taught in math class, but I quickly realized this was a bad idea and converted the code to the upper left convention. I added a series of flip options so that as long as you started in some reasonable grid layout, you could flip it to the upper left hand corner convention. I also pre-process the images with something like "find . -name '*.jpg' -exec convert -strip {} {} ';'" to get rid of accelerometer data and other metadata that I found over-smart programs using to mess things up. For example, gthumb will flip images based on it, which made me arrange the images wrong. Anyway, start by getting them into some intuitive grid and then flip them as mentioned earlier:

I had a picture demonstrating the flips... but I don't know where it is. In any case, the pictures above are already in the correct order, but are not named correctly for the column/row convention. I might allow parsing rows first to make the above arrangement possible; adding a transpose then arranges the image matrix correctly.
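The flip bookkeeping amounts to plain list manipulation. This is just an illustration of the idea, not the actual pr0ntools code; the file names and helper names here are made up:

```python
# Illustrative grid bookkeeping: images live in a 2D list, row-major,
# with the origin at the upper left.  File names and helpers are made
# up for the example, not the actual pr0ntools interface.

def flip_lr(grid):
    """Mirror each row (left/right flip)."""
    return [list(reversed(row)) for row in grid]

def flip_tb(grid):
    """Reverse the row order (top/bottom flip)."""
    return list(reversed(grid))

def transpose(grid):
    """Swap rows and columns, e.g. for a rows-first capture order."""
    return [list(col) for col in zip(*grid)]

# Example: a capture that started at the lower left needs a
# top/bottom flip to reach the upper-left convention.
grid = [['c0_r0.jpg', 'c1_r0.jpg'],
        ['c0_r1.jpg', 'c1_r1.jpg']]
grid = flip_tb(grid)
```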

Next, it generates all of the adjacent image pairings (as a generator). The images are cut down so that stitching only runs on the common overlap area. This cuts down processing time considerably and reduces false positives by limiting where matches can be placed. However, it adds some complexity when merging project files, discussed later. Image pairs look something like this:
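The pairing and cropping step can be sketched as below. The 30% overlap is a placeholder (use your actual scan overlap), and the crop boxes follow PIL's (left, upper, right, lower) convention so they can be handed straight to `Image.crop()`:

```python
def adjacent_pairs(cols, rows):
    """Yield each grid position's right and bottom neighbor,
    covering every adjacent pair exactly once."""
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                yield (c, r), (c + 1, r)  # horizontal neighbor
            if r + 1 < rows:
                yield (c, r), (c, r + 1)  # vertical neighbor

def overlap_boxes(w, h, overlap=0.3):
    """Crop boxes for the shared strip of a horizontal pair: the
    right edge of the left image and the left edge of the right one.
    The overlap fraction is an assumed placeholder."""
    strip = int(w * overlap)
    return (w - strip, 0, w, h), (0, 0, strip, h)
```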

I originally thought a lot of the distortion you see was due to camera/lens misalignment or something similar. I eventually realized it was the non-uniformity of my light source. It has a diffuser filter wheel, which seems to have helped a lot. I also moved it more off center, which decreased intensity but made the light more regular. In any case, it should be obvious from the above images that photometric optimization is a must for my images.

Next, running autopanoaj under Linux required some magic. First, it doesn't behave well with a number of file-related options, possibly due to WINE imperfections. The only way to get it to work reliably is to let it generate its own project file(s) by running it without any file options in the directory that has the images and where you want the project file(s). This requires post-processing to convert the WINE file paths in the image names to Linux file paths.
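The path fixup amounts to rewriting the image ('i') lines of the generated .pto. A rough sketch, assuming WINE's default mapping of Z: to the Linux root (check ~/.wine/dosdevices if yours differs):

```python
def wine_to_linux(line, drive='Z:'):
    """Convert a WINE-style Windows path in a .pto image ('i') line
    to a Linux path.  Assumes WINE's default Z: -> / drive mapping;
    this is a sketch, not the actual pr0ntools code."""
    if not line.startswith('i '):
        return line  # only image lines carry file names
    return line.replace(drive, '').replace('\\', '/')
```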

After that, projects are combined with pto_merge. While autopanoaj produces fairly lean projects, pto_merge seems to shove a bunch of junk in. This was creating some issues, so I decided to filter a lot of it out.

Finally, I do some post processing to get things closer to the final output. This includes changing the mapping to rectilinear and changing variables to only d (x) and e (y) optimization. Currently, stitching has to be finished in the GUI. This should be fixed if I can eliminate more control point gaps by image processing.
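The post-processing can be sketched as simple line filtering on the .pto: force the panorama ('p') line to rectilinear (f0) and rebuild the optimizer variable ('v') lines so only d/e get optimized. This is an illustration of the idea, not the pr0ntools code:

```python
import re

def postprocess_pto(lines):
    """Force the panorama ('p') line to rectilinear projection (f0)
    and rebuild the optimizer variable ('v') lines so that only the
    x/y offsets (d, e) of each non-anchor image are optimized.
    Illustrative sketch, not the actual pr0ntools code."""
    n_images = sum(1 for ln in lines if ln.startswith('i '))
    out = []
    for ln in lines:
        if ln.startswith('p '):
            out.append(re.sub(r'f\d+', 'f0', ln))  # rectilinear
        elif ln.startswith('v'):
            continue  # drop whatever variables were there before
        else:
            out.append(ln)
    # image 0 stays anchored; optimize the position of the rest
    for i in range(1, n_images):
        out.append('v d%d e%d' % (i, i))
    out.append('v')  # terminating empty v-line
    return out
```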

.pto documentation is surprisingly scarce among the panotools; I don't know if I'm just not looking in the right place. I eventually realized that the suite pto_merge is part of documents the .pto format quite well, which made me happy. That would be something good to add to the panotools wiki. I just requested mailing list membership and might bounce some of my ideas off of them.

One of the issues is that some of my images are of such poor quality that RANSAC / min result thresholding rejects the control points entirely. This is usually due to a blurry image. Example troublesome pair:

If I remove RANSAC, I can get it to generate a very poor match:

After a suggestion from someone, I played around with Kolourpaint's image transforms and observed that softening the images (a sort of blur) makes the features uniform in both, which lets the matcher successfully generate accurate control points. Although the transformed images look somewhat different, the control points are still in the same location on the original images. Example transformed images:

Wow! What an improvement. The new set was run with RANSAC since it generated so much better data. I have yet to figure out how to implement an equivalent transform in Python; I did some preliminary tests with ImageTransform.* but haven't tried very hard yet.
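For what it's worth, PIL's ImageFilter (rather than ImageTransform) may be the closer tool: a Gaussian blur approximates the soften. The radius below is a guess to tune, and this is only a sketch of the idea, not what Kolourpaint actually does internally:

```python
from PIL import Image, ImageFilter

def soften(im, radius=2):
    """Blur an image before control point detection.  GaussianBlur
    approximates the Kolourpaint soften; the radius is a guessed
    starting point to tune until the matcher finds good points."""
    return im.filter(ImageFilter.GaussianBlur(radius))
```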

I've also been working on fully decoding a CD4011. Hopefully I'll have a write-up of that soon. In some ways, large CMOS designs are easier than small jobs because standard logic cells and other components have to be more regular for large designs to scale. That aside, the schematic is rather simple and other factors make the number of permutations fairly small. The main issue has actually been how to record the circuit nicely. My main choices so far seem to be the visual6502 Python GUI and GIMP. I've found GIMP not so user friendly, although I hear it's not so bad once you get used to it. I'm not sure if any pictures of the layout editor have been published, and I don't remember any restrictions against publishing them, so for those of you who have never seen it:

I now work for an aerospace company, Skybox Imaging, where I'm starting to learn about rad-hard parts. I don't think I can get any from work, but if someone happened to have something, it might be fun to image and compare to other parts (though I may not be able to publish if there are ITAR issues). Finally, someone suggested I submit something to Kickstarter, so I figured why not. Better toys, better research.

Kickstarter rejected me; it was worth a try. Looking back over tools, Degate is really what I should be using. I played with it a little, and if I really want simulatable results, I'll look into writing a plugin for it to export to the visual6502 jsim format. It's too bad I lost the Fairchild CD4011 I had; it looked a lot more like the textbook CMOS I had seen. I only got initial images from it and then lost it. Since then, I've gotten better (now plastic) tweezers that tend not to slip and launch things.

Thursday, January 27, 2011

Metallurgical microscope CNC

I now own what is possibly the world's first combination CNC milling machine and metallurgical microscope.
I've realized, among other things, that I just like looking at dies to admire the work put into them. Unfortunately, it's a lot of work to take the many thousands of pictures required to get a good level of detail on even something like a 386. Plus, if you want the whole circuit, you need to repeat this for many layers.
Fortunately, I have some background in robotics, and since I'm planning on getting a better microscope in the next few months, I don't feel bad being a little more aggressive with my current setup. A Unitron N, the model I have, is supposed to look something like this (image from
The z-axis gear got partially stripped at one point when I was trying to fit a shim, as mine was missing. One thing I particularly hated was the upside-down sample mounting: I usually used Post-its or similar to hold the dies to a drilled-out petri dish. Which brings up the next annoyance, that it's awkward to even get something mounted onto the stage at all.
Not too scared, then, to be a little aggressive with my half-loved contraption, I got this after a few modifications:
See a crude video of it working here.
Some time ago I ditched the polaroid setup since I wasn't going to use that in any form. Next, I mounted the microscope upside down on some t-slot aluminium to make it much more convenient to view samples. Next, I wanted CNC control and I didn't really like the XYZ set-up anyway, so I replaced the XY with my Sherline 2000 CNC XY stage. Turns out, the CNC head can also still fit, but I didn't have it there during early testing.
The Z axis was a bit trickier. An earlier picture that shows the basic idea:
Also, you can see I had to tape the eyepieces in so they wouldn't fall out. At first I tried to figure something out with my rotary table, since it was the only other heavy-duty CNC equipment I (thought I) had. I also had a Z stage for optical work, but the thumb screw was very hard to turn and adapting a servo would be difficult; the dimensions were also awkward for actuating it with the rotary table. I eventually realized I had a CNC micrometer from half of a UV-VIS spectrometer I found and scrapped at RPI. Its brackets were close enough to easily adapt to the XY t-slot with an l-bracket. The sample tray base is a largish l-bracket onto which I've attached several different holders to experiment. Ultimately I'll probably replace it with a kinematic mirror mount so I can correct tilt errors easily. An early test was to instead use a largish petri dish for the same purpose, but I found that Z-axis movement tended to move things around too much; I should still try coupling it tighter to the main axis to reduce vibration, but unfortunately it doesn't seem suitable enough. Finally, the original setup depended on gravity to remain stable. To compensate, I have it tightened with a rubber band:
The rubber band goes around the brass part, which was supposed to be pressed against the shaft by the weight of the equipment mounted to it. As it's been turned on its side, this is no longer true. At some point I might see if I can make a more proper spring-loaded replacement.
One issue that came up was that although you can still view through the eyepiece, it's pretty awkward, and with the camera over one eyepiece and not wanting to re-adjust, it becomes difficult. So, I wanted to get the view onto a computer screen, which is probably nicer on the eyes anyway. A 1/8" audio-style jack breaks out composite video, which I convert to an RCA plug so it can go into my composite -> VGA converter box. The VGA then goes to an LCD display affixed to the t-slot. The second display behind the first, possibly not obvious in the above image, was arbitrarily fixed there to get a display up on a nearby media server and get the screen off of the floor.
The camera is mounted on t-slot aluminium as well. My Canon SD630 doesn't have a remote capture cord port and its USB only supports PTP, so there is no built-in way to do remote capture. So, I removed the top cover and soldered some wires onto the capture button. There are two contacts: focus and snap. Shorting snap by itself is not enough to take a picture; focus must be depressed first. A DB25 breakout box runs to some optoisolators to short the signals. I figured out the correct polarity by using a volt meter on the leads coming from the camera button.
The electronics hardware is very simple: the DB25 goes to a breakout board and then continues on to the stock Sherline driver box. I made a simple adapter to use the Vexta motor on the A axis with the Sherline box. The camera driver circuitry is very minimal:
The unused IC there is a CD4050 buffer I was going to use on the parallel port. I got lazy and didn't wire it up as the parallel port was already putting out near 5V.
Finally, there are several pieces to the software. At the core, I'm running EMC2. I set the step speeds and acceleration low so as to try to discourage the sample from vibrating. The camera is actuated from M7/M8 (coolant mist/flood) and then reset with M9 (coolant off). I use dwell instructions to give the camera enough time to take pictures, the necessary length of which I'm still working out.
The second part of the setup is the software that generates the g-code. I wrote a Python program that you can find in my pr0ntools github repo. It's very crude currently, but may be sufficient. It assumes you are scanning a rectangle: one point is assumed to be the origin and the other is supplied on the command line. To define the scan plane, I assume the most level plane you could form from those points. I'm currently always starting scan rows from the same side on the theory that it might reduce backlash issues, but I'm not sure if it matters.
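The core of such a generator might look something like this. It is a simplified sketch, not the pr0ntools program itself: the feed and dwell values are placeholders, only one of the M7/M8 trigger codes is shown, and the real program also interpolates Z to follow the fitted plane:

```python
def scan_gcode(x0, y0, cols, rows, step, feed=100, dwell=3.0):
    """Emit g-code for a raster scan of a cols x rows grid starting
    at (x0, y0).  Each row is scanned from the same side so backlash
    stays consistent.  M8 fires the rewired coolant output (camera
    trigger), G4 dwells while the picture is taken, and M9 resets.
    Feed and dwell are assumed placeholders to tune."""
    lines = ['G90']  # absolute positioning
    for r in range(rows):
        for c in range(cols):
            lines.append('G1 X%.3f Y%.3f F%d' % (x0 + c * step,
                                                 y0 + r * step, feed))
            lines.append('M8')                # take the picture
            lines.append('G4 P%.1f' % dwell)  # wait for the shutter
            lines.append('M9')                # reset the trigger
    return lines
```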
The test wafer I used looks like this:

An interesting piece in its own right; from what I can tell it came from an Intel Journey Inside the Computer educational kit. Of course, I didn't scan the entire wafer, just one chip. While the plastic distorts the image, it does make a good test since it's easy to level and rotate.
Between my lenses being kinda dirty (I can probably fix this or upgrade to modern optics; I think it uses DIN components), not having washed the wafer holder, and the plastic layer, the first pictures still came out relatively nice. While it may not produce quality images like the visual6502 team or the Flylogic team do, it should serve to efficiently create a number of relatively high resolution shots to my heart's content. When I get a better microscope, I might also look into CNC retrofitting it, but more likely I'll focus on improving this one, as better microscopes are currently beyond my budget for a high-risk project.

Wednesday, December 29, 2010

Berlin and CCC/Berlinsides

Just wanted to say I'll be in Berlin for a few days if anybody wants to say hi.
The lab is moving and I'm on the move, but I should also finally get settled, with $$$ flowing in, in the next few months; stand by for cool stuff.