Sunday, July 3, 2011

A simple experiment with JSSim (visual6502)

The folks at visual6502.org have really done a great job on their project and I've been meaning to get more familiar with their work for a while. Now that I've graduated I have more time for these projects and was able to dig in over the past week, especially yesterday. Bottom line: my experiment can be found at http://johndmcmaster.github.com/visual6502/ and a demo at http://johndmcmaster.github.com/visual6502/tutorial-inverter.html. This writeup is based on my git commit 6a613ee1131bbdec9a8bf4b6eeb02d13147842ab, which was forked from mainline's de265ecdb89d8c5d299f09ad69aaf8b87b1aed5d. Changes are as noted, but most code snippets are copyright Brian Silverman, Barry Silverman, Ed Spittles, Achim Breidenbach, Ijor, and maybe some others that I missed. See the github source for details.

I don't have much experience with JavaScript, but I have enough experience with C-like languages that it isn't really hard to use: I just try to follow the syntax of the things around me. I started by moving the 6502 into its own folder, as later chips have been tending to, so that I could focus on the relevant files more easily. For those not familiar with visual6502, here's a screenshot of the 6502 running in JSSim (JavaScript Simulator):


Although it's not obvious from the still picture, metal, poly, and diffusion are being colored according to their voltage potential. Wow! An outstanding way to learn about chips. However, the complexity of the simulator scared me off from really trying to understand how it worked. Fortunately, most of the work is in the data and the simulator core is easy to follow. In this post I'm going to step you through how visual6502 works and how to create a clocked inverter circuit using simple tools.

The first thing that you'll need is a reference diagram. I somewhat arbitrarily decided to try an NMOS inverter since I knew the 6502 was NMOS logic and could look at an example if I got stuck. An inverter just seemed like something I could easily clock with a single input. Let's start with a brief review of NMOS logic, since these days it's all about CMOS. In NMOS logic, we use a single transistor polarity and pull outputs low by shorting them to ground through transistors. Here is an NMOS inverter from Wikipedia:
When A is 0 the switch is open and current can flow from VDD through R to OUT (A: 0, OUT: 1). If we put voltage on A (the gate) the switch is closed and shorts out OUT (A: 1, OUT: 0) through the drain at top and source at bottom. NMOS was abandoned because CMOS doesn't have to burn power through the pull-up resistor when the input is 1, eventually became faster (better noise margin), and also took up less chip space.
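The pull-up-plus-switch behavior above can be captured in a few lines. This is a toy model of my own, not anything from visual6502:

```python
def nmos_inverter(a):
    """Toy model of the NMOS inverter: the pull-up resistor tries to
    drive OUT to 1; if the gate A is high, the transistor turns on and
    shorts OUT to ground."""
    out = 1          # pull-up from VDD through R
    if a == 1:       # gate high -> switch closed
        out = 0      # OUT shorted to GND
    return out

# Truth table: A=0 -> OUT=1, A=1 -> OUT=0
assert nmos_inverter(0) == 1
assert nmos_inverter(1) == 0
```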

Converting this into a (simplified) layout:
I used gray for metal, black for contacts, red for poly, green for N diffusion, and white for P substrate. The blue lettering is an arbitrarily assigned net (electrically connected group) number which we'll use later as we convert this into a simulatable data file. I might use the terms node and net interchangeably; they mean the same thing here. As a reminder, the distinction between source and drain is less important at a fundamental level. For our purposes we only care that the ends of the green blocks are the switch contacts and that the switch is controlled by the red part (polysilicon, aka poly). Finally, assuming self-aligned gates, the poly protects the silicon under the gate, so we only have diffusion around the poly and not under it. Early MOS processes used metal gates but later switched to poly (not regular Si, because you can't grow good crystals on an amorphous SiO2 glass surface).

Notice that we really try to avoid conventional resistors. While they can be made from strips of poly or diffusion, the easiest way is to make them out of transistors. I am not deeply familiar with this and initially had the drain and gate connected instead of the gate and source as above. So if you see images with them reversed, it's because I was too lazy to re-take screenshots after I fixed it. It's on my TODO list so that I can better recognize and understand them. The transistor below is more interesting and we'll mostly focus on it.

Pretty picture, but it's also pretty lifeless. Time to start digging into the codebase. If you grab a copy of the visual6502 source code (either from my repo listed above or from the main repository at https://github.com/trebonian/visual6502) you should see a chip-6800 subdirectory, which shows the files you'll need to create for your own simulation:
  • nodenames.js: defines human friendly node names such as clk0
  • segdefs.js: defines how to draw the non-transistor parts and their connections
  • transdefs.js: transistor-to-net connections and transistor drawing
  • support.js: utilities and overrides to stub out unneeded functions
  • testprogram.js: CPU instructions. Since we won't have a CPU we don't need this file

nodenames.js contains the nodenames variable and looks something like:
var nodenames = {
  gnd: 2,
  vcc: 1,
  clk0: 3,
}
vcc is net 1, gnd is net 2, and clk0 has been aliased to net 3.

segdefs.js contains the segdefs variable and looks something like:
var segdefs = [
  [ 4,'-',5, 177,94, 193,95, 193,179, 178,180],
  [ 1,'+',4, 128,214, 177,214, 177,265, 129,264],
  [ 2,'-',3, 128,95, 179,94, 177,146, 128,146],
  [ 4,'-',0, 66,163, 192,161, 193,179, 64,179],
]
This probably looks pretty cryptic at first glance. The first element is the node number. The second is the pullup status: '+' for pullup and '-' for regular (although I think any non-'+' value will work). That is, a '+' indicates a resistor is connected to the positive supply and will turn on attached gates if not shorted out. Each coordinate pair after the third element forms part of the polygon used to draw the chip. All of the above are rough rectangles.

The third element is the layer number. This does not affect the simulation to my knowledge, but we do want the visual aspect to work correctly. If you look in expertWires.js you should see:
var layernames = ['metal', 'switched diffusion', 'inputdiode', 'grounded diffusion', 'powered diffusion', 'polysilicon'];
var colors = ['rgba(128,128,192,0.4)','#FFFF00','#FF00FF','#4DFF4D',
'#FF4D4D','#801AC0','rgba(128,0,255,0.75)'];
var drawlayers = [true, true, true, true, true, true];
This defines the layer numbers (0-indexed). Thus the sample data above uses the layers poly, powered diffusion, grounded diffusion, and metal. Switched diffusion is diffusion that will change state during simulation because it's on a switched side of a transistor. In the sample image the two diffusion segments on the right are switched since they may or may not have a voltage potential on them depending on whether the transistor is on. The upper left diffusion is powered since it always has positive voltage, and the lower left is grounded diffusion since it's always at ground potential. Hopefully poly and metal are self-explanatory.
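To make the field order concrete, here's a small sketch of my own (not from the visual6502 code) that unpacks one segdefs entry into its parts:

```python
# Copy of the layernames list from expertWires.js
LAYER_NAMES = ['metal', 'switched diffusion', 'inputdiode',
               'grounded diffusion', 'powered diffusion', 'polysilicon']

def parse_segdef(seg):
    """Unpack a segdefs entry: [node, pullup, layer, x1, y1, x2, y2, ...]."""
    node, pullup, layer = seg[0], seg[1], seg[2]
    points = list(zip(seg[3::2], seg[4::2]))  # (x, y) polygon corners
    return {
        'node': node,
        'pullup': pullup == '+',
        'layer': LAYER_NAMES[layer],
        'points': points,
    }

# The vcc rail entry from the sample data above
parsed = parse_segdef([1, '+', 4, 128,214, 177,214, 177,265, 129,264])
# -> node 1, pulled up, on 'powered diffusion', with 4 corner points
```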

We render in the order given, so make sure to place them in a good order. Make metal last: it's semi-transparent and anything else would just cover it up. None of the other polygons (except transistors, but they aren't usually rendered) should overlap, but if they do, just arrange things as needed.

The final key file is transdefs.js which contains the transdefs variable:
var transdefs = [
  ['t1',4,2,3,[176,193,96,144],[415,415,11,5,4566],false],
  ['t2',1,1,3,[177,191,214,265],[415,415,11,5,4566],true],
]
The first element is the transistor name, which is followed by the gate, first, and second net connections respectively. As in the layout, we don't distinguish between the source and drain.
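Again as a sketch of my own, the fields of one transdefs entry line up like this (the remaining bounding-box and geometry lists only affect drawing as far as I can tell):

```python
def parse_transdef(t):
    """Unpack a transdefs entry: [name, gate, c1, c2, bb, geometry, weak]."""
    return {
        'name': t[0],      # transistor name
        'gate': t[1],      # net controlling the switch
        'c1': t[2],        # first switch connection (source/drain)
        'c2': t[3],        # second switch connection
        'bb': t[4],        # bounding box, drawing only
        'geometry': t[5],  # drawing/size parameters
        'weak': t[6],      # pullup-style transistor flag
    }

t1 = parse_transdef(['t1', 4, 2, 3, [176,193,96,144],
                     [415,415,11,5,4566], False])
# t1's switch connects nets 2 and 3 and is controlled by net 4
```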

Now that we know what data we need, the next step is to generate it. While I could learn to use or develop my own tools for converting layers to *.js files, I decided to go with the KISS strategy. I used the KolourPaint toolchain to generate my *.js files:


I generated the points by hovering the mouse over the various coordinates and typing them into the *.js files. With both windows open at once it went pretty quickly. If you're wondering why it's upside-down, it's because the simulator has the origin in the lower left hand corner and KolourPaint has it in the upper left hand corner. By flipping the image upside-down the coordinates come out correctly.
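The conversion is just a vertical flip. Using my 400-pixel image height as an example:

```python
IMG_HEIGHT = 400  # my images were 400 x 400

def paint_to_sim(x, y, height=IMG_HEIGHT):
    """KolourPaint puts the origin at the upper left, the simulator at
    the lower left, so flip the y coordinate; x is unchanged."""
    return (x, height - y)

assert paint_to_sim(128, 214) == (128, 186)
```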

But it's not over yet. I've glossed over utils.js, but it's actually necessary for this to work. The stock functions are specialized for a full blown MCU, a 6502 in particular, and we will have to override these functions as appropriate. Finally, we need to set the canvas size by setting grChipSize, which sets both width and height. My images were 400 x 400 so I set grChipSize to 400. Let's step through initialization so that we know what we need to fix up.

We start in the main .html file by including a bunch of stuff. In particular you'll need to change the paths to reflect your files instead of the template's. For example, starting from chip-6800 I had to substitute things like:
<script src="chip-6800/segdefs.js"></script>
with
<script src="chip-tutorial/inverter/segdefs.js"></script>
or wherever you put your files. Trusting the general structure and skipping over the HTML layout, the key item is
function handleOnload() {
...
setTimeout(setup,200);
...
}
This launches setup() in expertWires.js after 200 milliseconds. The other key item in the main file is the play button:
<a href ="javascript:runChip()" id="start"><img class="navplay" src="images/play.png" title="run"></a>
which calls runChip(), but we won't worry about this for now.

This function is mostly just a bootstrap for the next stage. They do a lot of this and I'm not sure why they don't just make direct function calls.
EDIT: I've been told this is related to not letting scripts run so long that the browser complains. By re-queuing the work through setTimeout the browser doesn't get so angry. They aren't sure if this is standard practice for web apps, but it seems to work.
Anyway, here it is:
function setup(){
statbox = document.getElementById('status');
setStatus('loading ...');
setTimeout(setup_part2, 0);
}
And this gives:
function setup_part2(){
frame = document.getElementById('frame');
statbox = document.getElementById('status');
setupNodes();
setupTransistors();
setupParams();
setupExpertMode();
detectOldBrowser();
setStatus('loading graphics...');
setTimeout(setup_part3, 0);
}
setupNodes() works on segdefs to set up the visual portion. For historical reasons (per a comment I read somewhere) segdefs also contains the pullup status, as noted earlier.

setupTransistors() does the actual transistor and net setup. One point of interest is that if c1 isn't an "interesting" net but c2 is, the two are swapped (i.e. GND and VCC get moved to c2 if they weren't there already in transdefs.js). We also build a list of all of the transistors connected to each net. That way, when we simulate an event, we only have to look up the net instead of iterating through all of the other transistors looking for relevant gates, trading memory for CPU time.
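The per-net transistor lists amount to an index keyed by net number. A Python rendition of the idea (my own sketch of what setupTransistors builds in JavaScript):

```python
from collections import defaultdict

# name, gate, c1, c2 (drawing fields omitted), as in transdefs.js
transdefs = [
    ['t1', 4, 2, 3],
    ['t2', 1, 1, 3],
]

# For each net, which transistor gates does it drive?
gates_on_net = defaultdict(list)
for name, gate, c1, c2 in transdefs:
    gates_on_net[gate].append(name)

# When net 4 changes we only need to recalculate t1 instead of
# scanning every transistor on the chip for a matching gate.
assert gates_on_net[4] == ['t1']
```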

setupParams() parses query parameters (page.html?key=value) and so isn't important for basic usage. setupExpertMode() sets up the probe control panel and you don't really need to worry about it. Finally, detectOldBrowser() is compatibility related (makes rendering faster on certain systems?) and you also don't need to worry about it.

We now move onto setup_part3():
function setup_part3(){
if(chipLayoutIsVisible){
updateChipLayoutVisibility(true);
}
setStatus('resetting ' + chipname + '...');
setTimeout(setup_part4, 0);
}
The chip layout should be visible, so we start to render the layout and move on to part 4:
function setup_part4(){
setupTable();
setupNodeNameList();
logThese=signalSet(loglevel);
loadProgram();
setupConsole();
if(noSimulation){
stopChip();
running=undefined;
setStatus('Ready!');
} else {
initChip();
document.getElementById('stop').style.visibility = 'hidden';
go();
}
}
Glaze over things and go to initChip(), which is important since you'll need to define it. initChip() is responsible for setting the startup logic state. Unfortunately the default implementation in macros.js has statements like setHigh('rdy') which are 6502-specific. I cut that stuff out to give a very basic chip initialization instead. See my support.js; basically it sets all transistors to off and then recalculates all nodes (recalcNodeList(allNodes())).

recalcNodeList() is a core interface. It's a discrete event simulator where we propagate switch information when things change. Since there's no guarantee it will settle, it will abort after 100 iterations in case we did something dumb like create a ring oscillator by accident.
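In outline the settling loop looks like this. This is my own simplification; the real recalcNodeList also handles net grouping, pullups, and VCC/GND specially:

```python
def recalc_node_list(dirty, recalc_node, max_iters=100):
    """Discrete-event style settling loop. recalc_node(n) re-evaluates
    one net and returns the set of nets whose value changed as a
    result; we keep propagating until nothing changes or we hit the
    iteration cap (e.g. an accidental ring oscillator)."""
    for _ in range(max_iters):
        if not dirty:
            return True                 # circuit settled
        next_dirty = set()
        for n in dirty:
            next_dirty |= recalc_node(n)
        dirty = next_dirty
    return False                        # gave up: did not settle

# A net that propagates to nothing settles immediately
assert recalc_node_list({3}, lambda n: set()) is True
# A net that keeps re-dirtying itself never settles
assert recalc_node_list({3}, lambda n: {n}) is False
```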

setup_part4 finishes by calling go(). This starts looping the simulation. Usually this happens by hitting the play button in main, but I hard coded the running variable to true so that I didn't have to hit the button. Also worth noting that I added a step delay variable (go_timeout). It may make sense for larger chips to run at full throttle, but for this simple simulation I limited it to 1 Hz. step() will look for the net clk0 and invert its state. It also does a few other things, so I added the following stubs:
/*
Simple logic chip stubs
*/
/*
Print registers and such, we have none
Could use the input and output pins I guess if we really wanted to
Used extensively in macros.js
*/
function chipStatus(){}
//Simple logic chips have no bus to read/write, skip over
//Executed as part of clocking (halfStep()) in macros.js
//Alternatively we could have just re-implemented these functions
function handleBusRead() {}
function handleBusWrite() {}
//Stub implementation in case not using memtable.js
//No memory to track
function setupTable() {}
Whew! We should be ready to run. Check my data files for how I defined the *.js files. All right, let's see what we get:




?!? Upon a little investigation, we see that there is a 400 pixel gutter. Since our image is 400 pixels, if we set grChipSize to 1200, we will see it centered at the bottom:


But really we want it to look nicer, so let's take off the 400 pixel left gutter:


Alternatively we could have made the transistors big enough that the gutter doesn't matter. I added the variable main_area_left_gutter and set it to 0. I'm not clear why they added a gutter to the left but not the bottom. In any case, let's see some clocking action! (The above image was taken before I added the clock.) Clock on:


Clock off:


And it works! As you can see, the powered and grounded diffusion stay the same while the switched diffusion changes along with the metal. Not too much work overall, even if you don't know much about web technology.

Thanks to the visual6502 folks for providing such great software and making my inverter more correct instead of "just working"! My next steps will be to start cross referencing the *.js files against the die images and also to generate *.js automatically from layer images. On a final note, I've also learned a relatively simple technique for preparing ICs for live analysis that I'll hopefully post about in the near future.

Wednesday, March 2, 2011

Studying the CD4001

Somewhat arbitrarily I decided a CD4001 would be a good chip to really study to get a better feel for how a chip was put together. While I can recognize bits and pieces of larger chips, I still lack the fundamental understanding of how to recognize raw transistor arrangements. Although such basic logic chips have heavy optimizations which can be somewhat undesirable as a study tool, I'm hoping their simplicity makes up for it.

The original chip I was going to look at was a Fairchild CD4011:


I decapsulated it and found it had nice coloring:


Hopefully "POS" doesn't refer to their confidence in their design. This was only intended to be a preliminary quick photo before cleaning, but my metal tweezers slipped and sent it flying to who knows where. I now have plastic tweezers, which tend to chip the dies less and are less susceptible to slipping. Anyway, take a look at what I think is a National Semiconductor 4001 (it was in a tank of 4001's):


In a similar area:


Maybe it's just the "natural" arrangement for this sort of configuration? I'll figure out more as I etch out the transistors. It's interesting, though, that one's a CD4001 and the other is a CD4011.

Another item of interest is that older Texas Instruments datasheets had the top metal included. Compare a datasheet with one of my snaps:


...and the (rough) stitch:


One interesting thing with the TI parts is that you can identify pin 1 with a bullet shaped pad. Other vendors have similar things and it seems the shapes tend to be unique per vendor. For example, it seems Motorola may use an octagon like pattern (all taken from what appear to be different revisions of the same 4001):




The first two are nearly identical. The last one has a full octagon, whereas the first two had a square corner.

I have some etching chemicals coming that will hopefully arrive by this weekend so I can use them to expose some transistors. I have a roll of 100 Philips 4011's (about $6 from Jameco):


which I'll practice on; then, once I have some results, I'll expose the other chips that I have in more limited quantities. Since these chips are so simple, I can actually make guesses as to what a lot of things do, but I would like the transistors as well to complete the picture. If successful, I'd like to write up a tutorial that takes someone through decoding the chip.

On a random note, I get a lot of my chips by scrapping old electronics. I heat gun the board (wearing my 3M industrial respirator so as to not get too many fumes) and collect chips into a tray. Usually there are only a few I really care about, such as the main CPU or some FPGAs. There are lots of leftover small chips. It's not cost effective for me to use them in anything I design, for a number of reasons. So, what to do with them? How about throw them in a beaker and decap en masse:


The larger chip is an i960 that I savagely ripped out of a computer that was being junked. As such, it got cracked in two spots. Setting up for mass photography:


They are on a microscope slide with sticky tape. I estimate I spent only about 1 min on each chip. Granted, this has limited usefulness, but it does show a number of interesting designs and I was never going to use the chips otherwise. For the curious, I uploaded a bunch of them to http://intruded.net:8080/uv/UVSG/

Thursday, February 17, 2011

Scaling up image stitching

In summary, find the Python program I wrote here (it works, but is a work in progress); you'll need the entire repo though:
https://github.com/JohnDMcMaster/pr0ntools/tree/master/stitch

Now that image capture is getting reasonably automated, stitching is the next bottleneck for mass scale IC -> netlist conversion. The Visual6502 team is working on scaling up their image -> netlist conversion. I recently got in contact with them and am hoping to get more involved. In the meantime, I suppose I'm a turbo nerd and enjoy just looking over the layouts.

Knowing that Visual6502 had the best images, I managed to convince Christian Sattler to give me his stitch code and get it under an open source license; I somewhat arbitrarily called it csstitch. You can now find it here along with some of my patches. Unfortunately, I quickly realized that the high quality imagery from the confocal microscope had simplified a lot of the stitching. For example, no photometric optimization was being done, and it was based on autopano-sift-c (SIFT based), from which I've always gotten far inferior results compared to autopano.kolor.com (also SIFT based, which I call autopanoaj since those are the author's initials and autopano is too vague). From what I can tell, autopanoaj's secret may be that it has a very good outlier detection algorithm. If you turn it off, it produces many very poor control points (features). I've also been playing around with panomatic (SURF based). My general feel has been that the quality is less than autopano-sift-c, but I haven't had enough time yet to give it a fair trial.

Having this experience and getting some ideas from csstitch, I had dabbled at making my own higher performance stitching app. With the CNC producing very accurate XY coordinates, it seemed I could heavily optimize the control point finding process. Unfortunately, there turned out to be a bunch of gotchas along the way. Some of them are due to some oddities of the .pto format, some of them due to the fact that I run autopanoaj under WINE (yuck...) since I don't want to run Windows and the Linux version is out of date.

The first step is to arrange the images into a rectangle. Since the Python Imaging Library (PIL) and .pto like the origin at the upper left, this seemed the natural coordinate system. At first I tried lower left since that's what I was taught in math class, but quickly realized this was a bad idea and converted the code to use the upper left hand coordinate system. I added a series of flip options so that as long as you started in some reasonable grid layout, you could flip it to the upper left hand corner convention. I also pre-process the images with something like "find . -name '*.jpg' -exec convert -strip {} {} ';'" to get rid of accelerometer data and other metadata that over-smart programs use to mess things up. For example, gthumb will flip images based on this, which made me arrange the images wrong. Anyway, start by getting them into some intuitive grid and then flip them as mentioned earlier:

I had a picture demonstrating the flips... but I don't know where it is. In any case, these pictures are already in the correct order above, but are not named correctly for the column/row convention. I might allow parsing rows first to make the above arrangement possible. If you add a transpose, the image matrix is arranged correctly.
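The flips themselves are simple list operations. A sketch of my own, assuming the grid is a list of rows of filenames:

```python
def flip_lr(grid):
    """Mirror left-right (reverse each row)."""
    return [list(reversed(row)) for row in grid]

def flip_ud(grid):
    """Mirror top-bottom (reverse the row order)."""
    return list(reversed(grid))

def transpose(grid):
    """Swap rows and columns."""
    return [list(col) for col in zip(*grid)]

grid = [['a', 'b'],
        ['c', 'd']]
assert flip_ud(grid) == [['c', 'd'], ['a', 'b']]
assert transpose(grid) == [['a', 'c'], ['b', 'd']]
```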

Next, it generates all of the adjacent image pairings (as a generator). The images are cut down to only stitch on a common overlap area. This cuts down processing time considerably and reduces false positives by limiting where matches can be placed. However, we've added some complexity with merging project files, discussed later. Image pairs look something like this:
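Generating the adjacent pairings is a small generator. A sketch of the idea (my own names, not the actual pr0ntools code):

```python
def adjacent_pairs(rows, cols):
    """Yield ((row, col), (row', col')) index pairs for horizontally
    and vertically adjacent images in a rows x cols grid."""
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                yield ((r, c), (r, c + 1))  # right neighbor
            if r + 1 < rows:
                yield ((r, c), (r + 1, c))  # lower neighbor

pairs = list(adjacent_pairs(2, 2))
# 4 images in a 2x2 grid give 4 adjacent pairs to feed the matcher
assert len(pairs) == 4
```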




I originally thought a lot of the distortion you see was due to camera/lens alignment or something similar. I eventually realized it had to do with the non-uniformity of my light source. It has a diffuser filter wheel, which seems to have helped a lot. I also put it more off center, which decreased intensity but made the light more regular. In any case, it should be obvious from the above images that photometric optimization is a must for my images.

Next, running autopanoaj under Linux required some magic. First, it doesn't respond well to a number of file related options, possibly due to WINE imperfections. The only way to get it to work reliably is to let it generate its own project file(s) by running it without any file options in the directory containing the images, which is also where you want the project file(s). This requires post-processing to convert the WINE file paths to Linux file paths for the image names.

After that, the projects are combined with pto_merge. While autopanoaj produces fairly lean projects, pto_merge seems to shove a bunch of junk in. This was creating some issues, so I decided to filter a lot of it out.

Finally, I do some post processing to get things closer to the final output. This includes changing the mapping to rectilinear and changing variables to only d (x) and e (y) optimization. Currently, stitching has to be finished in the GUI. This should be fixed if I can eliminate more control point gaps by image processing.

.pto documentation is surprisingly scarce among the panotools. I don't know if I'm just not looking in the right place. I eventually realized that the suite pto_merge is part of has some good documentation and was quite happy to find a good description of the .pto format. It would be something good to add to the panotools wiki. I just requested mailing list membership and might bounce some of my ideas off of them.

One of the issues is that some of my images are of such poor quality that RANSAC / min result thresholding rejects the control points entirely. This is usually due to a blurry image. Example troublesome pair:



If I remove RANSAC, I can get it to generate a very poor match:


After a suggestion from someone, I played around with Kolourpaint image transforms and observed that softening the images (blurring sorta) causes the features to be uniform in both and can successfully generate accurate control points. Although the images look somewhat different, the control points are still in the same location on the original images. Example transformed images:



Wow! What an improvement. The new set was run with RANSAC since it generated so much better data. I have yet to figure out how to implement an equivalent transform in Python, although I did some preliminary tests with ImageTransform.* and haven't tried very hard yet.
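The softening is essentially a low-pass blur that makes fine features look the same in both images. As a stdlib-only stand-in for the KolourPaint transform (a real implementation would use PIL's filters), a minimal box blur on a grayscale array:

```python
def box_blur(img):
    """3x3 box blur on a 2D list of grayscale values, averaging each
    pixel with its in-bounds neighbors."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) // len(vals)
    return out

# A lone bright pixel gets spread over its neighborhood
img = [[0, 0, 0],
       [0, 90, 0],
       [0, 0, 0]]
blurred = box_blur(img)
assert blurred[1][1] == 10  # 90 averaged over 9 pixels
assert blurred[0][0] == 22  # 90 averaged over the 4 corner pixels
```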

I've also been working on fully decoding a CD4011. Hopefully I'll have a write-up of that soon. In some ways, large CMOS designs are easier than small jobs because standard logic cells and other components have to be more regular for large designs to scale. That aside, the schematic is rather simple and other factors make the number of permutations fairly small. The main issue has actually been how to record the circuit nicely. My main choices so far seem to be the visual6502 Python GUI and GIMP. I've found GIMP not so user friendly, although I hear it's not so bad once you get used to it. I'm not sure if any pictures of the layout editor have been published, and I don't remember any restrictions against publishing them, so for those of you who have never seen the layout editor:


I now work for an aerospace company, Skybox Imaging, where I'm starting to learn about rad hard parts. I don't think I can get any from work, but if someone happened to have something, it might be fun to image and compare to other parts (though maybe not publishable if there are ITAR issues). Finally, someone suggested I submit something to Kickstarter, so I figured why not. Better toys, better research.

EDIT:
Kickstarter rejected me; it was worth a try. Looking back over tools, Degate is really what I should be using. I played with it a little, and if I really want simulatable results, I'll look into writing a plugin for it to export to the visual6502 JSSim format. It's too bad I lost the Fairchild CD4011 I had; it looked a lot more like the textbook CMOS I had seen. I only got initial images from it and then lost it. Since then, I've gotten better (plastic now) tweezers that tend not to slip and launch things.

Thursday, January 27, 2011

Metallurgical microscope CNC

I now own what is possibly the world's first combination CNC milling machine and metallurgical microscope.
I've realized, among other things, that I just like looking at dies to admire the work put into them. Unfortunately, it's a lot of work to take the many thousands of pictures required to get a good level of detail on even something like a 386. Plus, if you want the whole circuit, you need to repeat this for many layers.
Fortunately, I have some background in robotics, and since I'm planning on getting a better microscope in the next few months, I don't feel bad being a little more aggressive with my current setup. A Unitron N, the model I have, is supposed to look something like this (image from http://microscopesonline.info/):
The z-axis gear got partially stripped at one point when I was trying to fit a shim, as mine was missing. One thing in particular I hated was the upside down sample mounting. I usually used post-its or similar to hold the dies to a drilled out petri dish. Which of course brings us to the next annoyance: it's hard to even get something mounted onto the stage at all.
Not too scared, then, to be a little aggressive with my half loved contraption, I got this after a few modifications:
See a crude video of it working here.
Some time ago I ditched the polaroid setup since I wasn't going to use that in any form. First, I mounted the microscope upside down on some t-slot aluminium to make it much more convenient to view samples. Next, I wanted CNC control and didn't really like the XYZ set-up anyway, so I replaced the XY with my Sherline 2000 CNC XY stage. It turns out the CNC head can also still fit, but I didn't have it there during early testing.
The Z axis was a bit trickier. An earlier picture that shows the basic idea:
Also you can see I had to tape the eyepieces in so they wouldn't fall out. At first I tried to figure something out with my rotary table, since it was the only other heavy duty CNC equipment I (thought I) had. I also had a Z stage for optical work, but the thumb screw was very hard to turn and adapting a servo would be difficult. The dimensions also made it awkward to actuate with the rotary table. I eventually realized I had a CNC micrometer from half of a UV-VIS spectrometer I found and scrapped at RPI. The brackets were close enough to easily adapt to the XY t-slot with an l-bracket. The sample tray base is a largish l-bracket onto which I've attached several different holders to experiment. Ultimately I'll probably replace it with a kinematic mirror mount so I can correct tilt errors easily. An early test was to instead use a largish petri dish for the same purpose, but I found that Z axis movement tended to move things around too much. I should still try to couple it tighter to the main axis to reduce vibration, but unfortunately it doesn't seem suitable enough. Finally, the original set-up depended on gravity to remain stable. To compensate, I have it tightened with a rubber band:
The rubber band goes around the brass part, which was supposed to be pressed against the shaft by the weight of the equipment mounted to it. As it's been turned on its side, this is no longer true. At some point I might see if I can make a more proper spring loaded replacement.
One issue that came up was that although you can still view through the eyepiece, it's pretty awkward. With the camera over one eyepiece and not wanting to re-adjust, it becomes difficult. So, I wanted to get the view onto a computer screen, which is probably nicer on the eyes anyway. A 1/8" audio style jack breaks out composite, which I convert to an RCA type plug so it can go into my composite -> VGA converter box. The VGA then goes to an LCD display affixed to the t-slot. The second display behind the first, possibly not obvious in the above image, was arbitrarily placed there to get a display up on a nearby media server and get the screen off of the floor.
The camera is mounted on t-slot aluminium as well. My Canon SD630 doesn't have a remote capture cord port and its USB only supports PTP, so there is no built in way to do remote capture. So, I removed the top cover and soldered some wires onto the capture button. There are two contacts: focus and snap. Shorting snap by itself is not enough to take a picture; focus must be depressed first. A DB25 breakout box runs to some optoisolators to short the signal. I figured out the correct polarity by using a volt meter on the leads coming from the camera button.
The electronics hardware is very simple. The DB25 goes to a breakout board and then continues on to the stock Sherline driver box. I made a simple adapter to use the Vexta motor on the A axis with the Sherline box. The camera driver circuitry is very minimal:
The unused IC there is a CD4050 buffer I was going to use on the parallel port. I got lazy and didn't wire it up as the parallel port was already putting out near 5V.
Finally, there are several pieces to the software. At the core, I'm running EMC2. I set the step speeds and acceleration low to try to discourage the sample from vibrating. The camera is actuated via M7/M8 (coolant mist/flood) and then reset with M9 (coolant off). I use dwell instructions to give the camera enough time to take pictures; I'm still working out how long the dwell needs to be.
The second part of the set-up is the software that generates the g-code. I wrote a Python program that you can find in my pr0ntools github repo. It's very crude currently, but may be sufficient. It assumes you are scanning a rectangle. One point is assumed to be the origin and the other is supplied on the command line. To form the scan plane, I take the most level plane you could form from those points. I'm currently always starting scans from the same side on the theory that it might reduce backlash issues, but I'm not sure if it matters.
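A stripped-down sketch of the g-code generation (my own function and parameter names; the real pr0ntools script handles the plane fit and more):

```python
def scan_gcode(x_end, y_end, step, dwell=2.0):
    """Emit g-code for a one-directional raster scan: move, trigger
    the camera via the coolant outputs (M8 on, M9 off), and dwell so
    the camera has time to take the shot. Each row starts from X=0 so
    backlash stays consistent."""
    lines = []
    y = 0.0
    while y <= y_end:
        x = 0.0
        while x <= x_end:
            lines.append('G0 X%0.3f Y%0.3f' % (x, y))  # rapid move
            lines.append('M8')                 # camera trigger on
            lines.append('G4 P%0.1f' % dwell)  # wait for the picture
            lines.append('M9')                 # camera trigger off
            x += step
        y += step
    return lines

g = scan_gcode(1.0, 1.0, 1.0)
assert g[0] == 'G0 X0.000 Y0.000'
assert len(g) == 16  # 4 positions x 4 lines each
```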
The test wafer I used looks like this:

An interesting piece in its own right; from what I can tell it came from an Intel Journey Inside the Computer educational kit. Of course, I didn't scan the entire wafer, just one chip. While the plastic distorts the image, it does make a good test as it's easy to level and rotate.
Despite a combination of my lenses being kinda dirty (probably can fix this or upgrade to modern optics; I think it uses DIN components), not washing the wafer holder, and the plastic layer, the first pictures came out relatively nice. While it may not produce quality images like the visual6502 team or Flylogic team does, it should serve to efficiently create a number of relatively high resolution shots to my heart's content. As I get a better microscope, I might also look into CNC retrofitting it, but more likely I'll focus on improving this one, as better microscopes are currently beyond my budget as a high risk project.

Wednesday, December 29, 2010

Berlin and CCC/Berlinsides

Just wanted to say I'll be in Berlin for a few days if anybody wants to say hi.
Lab is moving and I'm on the move, but I should also finally get settled and $$$ flowing in in the next few months, stand by for cool stuff.

Tuesday, November 30, 2010

Sulphuric acid decapsulation

Something I've been meaning to try for some time. Somewhat arbitrarily I decided to go analog, the victims were a 741 op amp and a 555 timer. The torture:
The victim (I snapped the pins off since they are easy to remove and had more of an impact on the nitric reactions):
Initial setup:
I didn't take a picture of this, but the solution started to turn brown and diffuse out around the IC before too long. It began to turn darker:
And eventually black:
The acid behaved differently in the 555 flask (mist but no creeping along the sides in the 555, vs. creeping but no mist in the 741):
After draining and some initial acetone rinse:
The die is seen in the leftmost object. I've been told that sulphuric is useful for live decapsulation and it certainly shows here. Much of the "wiring" was preserved despite prolonged exposure to acid. Nitric on the other hand would have obliterated these. Not as visible, but also all of the bond wires were preserved.
Since they were both analog ICs made by ST, it's less likely that they used different epoxies, and both runs used fresh acid, so the difference was probably due to some contamination in the flasks.
I'll try to update with some IC pictures. Nitric tends to leave a lot of residue; this, on the other hand, produced overall clean dies, although one of them had sort of a grainy appearance, maybe from certain residues? Apparently the 555 didn't have a passivation layer and the 741 did, which resulted in scratches on the 555 after I was careless during plucking and didn't realize it wasn't protected.

In summary, here's what I thought:
Advantages
-Fewer fumes than nitric acid; MIGHT be safer with less equipment / ventilation. With the cover on my glassware and the top being somewhat cold from the ambient temperature, it seemed to reflux the acid and I didn't even really notice the fumes. Contrast with nitric, where fumes are an inherent problem from the nitrate decomposition.
-Inexpensive
-Readily available materials? Battery acid and drain cleaner are readily available. Battery acid tends to be purer and would likely need to be distilled first, while drain cleaner (ex: Bull Dozer) is much stronger but has contaminants. In any case, it's generally not a controlled substance and one should be able to order it without too much trouble.
Disadvantages
-Higher working temperature. A spill might literally take your hand off. When I was younger a single drop of cold concentrated sulphuric landed on my hand and caused a severe burn, of which I'm reminded to this day by a scar. I can't imagine what a broken boiling beaker could do.
-From the solution turning black and the lack of bubbles, there is no clear indication of when it's "done." Combined with the longer cool-down time of the acid and glassware, this can make it inefficient for small batches.
-Grainy appearance on dies? Need to look more into where that came from
Overall, probably a good compromise for those that want to try some of this stuff.

With this in mind, one good application might be to use it as a wash after nitric. Since I've found issues with particulate residues after nitric, a brief sulphuric bath might be able to clear them off. I think sulphuric works at lower (i.e. room) temperatures, albeit much more slowly, so it might not even require heating. As a starter, I'll probably try to soak a fully encapsulated IC overnight and see how it goes.

Monday, November 8, 2010

Back to Troy, NY

After spending the summer in Cambridge, MA and a few weeks back in the SF Bay area, I'm back in Troy, NY. What makes Troy special? I'll tell you...
Luxurious homes
Expensive cars
Booming industry
And fine art
Okay, so it's not quite as bad as I make it look, but most of these were taken pretty close to my apartment. To be fair, they've been knocking down a lot of the old buildings, and graffiti is pretty rare except when this construction wall went up at RPI and people went nuts. I'll omit those pictures: if the first ones don't get me hate mail, I get the feeling RPI might give me a "strong suggestion" to take down the latter.
Now that the small talk is out of the way, on to business. Although I haven't been posting anything, a lot has been happening. First, the microscope I previously mentioned never came, but eBay refunded me. However, my roommate bought a metallurgical microscope with a USB camera, so I'm better off than ever. Being off campus now, I also have fewer restrictions and don't have to deal with RA BS and such. One perk of my apartment is that I got some lab space in an area that's being remodelled. It's going to go away in January, but I'm hopefully moving out then anyway, so that shouldn't really affect me. The end result is that I'm finally getting a chance to do all of the stuff I wanted to before, and actually have some time and space to try things out.
I've taken a bunch more IC pictures. In particular, I have images of discrete transistors, a fully delayered 7404 hex inverter, and other ICs.
3906 top metal
Old TMS320 logo section
On that note, a die image archive was started at http://intruded.net:8080/wiki/ Since I like wikis, I got myself an account, and you should expect any die images I publicly release to appear there. I posted a few from a bit back, but haven't gone on a rampage yet. One of the things they are working on is getting a "Google Maps" style IC viewer for larger ICs. A crude test page is at http://intruded.net:8080/map/ (you'll have to zoom to the correct level).
Map view test for large ICs
Regarding http://siliconpr0n.wikispaces.com/, I recently got permission from Sergei P. Skorobogatov to include images from his Semi Invasive Attacks paper on the Wiki as long as they are credited to him. So, along with the other material I've been accumulating from my own research, expect some rapid expansion on the Wiki in the near future.
After delayering a few 7400 series ICs, I've realized I had in fact been at the transistor layer before, but just didn't understand what I was looking at. I'd probably been confused by all of the MOS pictures I had seen? In any case, I tried a 74163, but found it was too complex to start with; I could only recognize a handful of components. A few days ago I delayered a 7404, which should provide a much cleaner reference circuit since it's small and more or less split into 6 regular units. Unfortunately, I let it sit for a while without agitating it, so it crystallized a bit, but it should be fine for my purposes.
7404 transistors
A quick overview of the techniques I currently use and why. Most ICs are in epoxy. I boil them in 70% nitric until the epoxy is removed. Lacking an ultrasonic cleaner, I wash them in room-temperature 3% HF for about a minute to clean the surface; this takes a thin layer off the top, which removes most debris. Then, depending on how patient I'm feeling, either room-temperature or near-boiling 3% HF is used to delayer the IC. If I want to keep an IC suitable for live analysis (mostly my roommate has been doing this), a Dremel "drill press" with a small endmill is used to mill a cavity above the die. We use a rough estimate, usually slightly above the pins, to guess how far down to go. The package is pre-heated to 300F and a drop of acid is put on top, allowed to etch, and washed with acetone before it dries out. Heating the acid doesn't seem to make a difference, since its heat (minus transfer cool-off) is negligible compared to the thermal mass of many IC packages. I also played around briefly with another low-cost method that is more automatic but less selective; I'll try to post something on that soon.
Finally, I'm interviewing with various companies and looking for a job, so if you think you might be interested in me, feel free to send me an e-mail at JohnDMcMaster gmail.com.