


  • Thursday, July 25, 2013 - 18:04
    ShopBot Desktop as a 3D Printer for Sugar Glass

    Hi! For bioengineering research I've been looking for a way to improve the precision and reproducibility of each print, with somewhat less emphasis on cost (this is for working with human cells eventually, so safety and sterility matter more than cost). The main constraint is that it needs to print sugar glass using my BariCUDA extruder, which means the extruder mount must support a few pounds without any problems.

    Kliment in #reprap suggested I modify a ShopBot Desktop, and so that's exactly what we did. With awesome help from Gordon at ShopBot and Johnny at Ultimachine, we were able to get things going. Also, MAJOR props to Erik Zalm who maintains Marlin firmware for helping us get everything going. NOTE: BARICUDA is now a #define in Marlin so you can turn on/off sugar printing functionality on your RAMBo with a simple switch. Thanks again Erik!

    We took out the brains of the ShopBot, kept the Gecko stepper drivers, and replaced the brains with a RAMBo board from Ultimachine. We used the motor-ext pins on the RAMBo board to send the step and direction pulses, fed directly into the stepper drivers through a modified 37-pin connector. My modified Marlin firmware is available here: https://github.com/jmil/Marlin Here's the setup and some more details in the first video:
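    For reference, turning the sugar-printing support on is just a matter of defining the symbol before building the firmware. A sketch (the exact file and surroundings vary between Marlin versions, so treat the location as an assumption to verify in your own tree):

```cpp
// In Marlin's configuration headers (e.g. Configuration_adv.h -- the exact
// location varies by Marlin version), define this to compile in the
// BariCUDA paste-extrusion support:
#define BARICUDA
```

    Leaving the symbol undefined builds a standard thermoplastic-only firmware from the same source tree.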

    The ShopBot uses ACME rod throughout for movement, and it can drive the motors very fast because the large motors (NEMA 34?) are held at 48 V, so you don't lose steps. It's still open-loop motion control, but it has been awesome. This RepStrap has been fantastic for sugar printing and is used every day in the lab at UPenn.

    Now that I am setting up a new lab at Rice University in Houston TX, I am very excited to get another one! Forward SCIENCE! Did I mention you should contact me if you want to come do a Sabbatical? We need more specialized repraps for Bioengineering. More on that next month. :D

    Here's the final video printing sugar glass on a ShopBot Desktop:

  • Wednesday, July 24, 2013 - 18:32
    3D printers shown to emit potentially harmful nanosized particles

    I am reposting this from physorg.com.  It appears that we might have a problem not so much with outgassing from our printers but with nanoparticles produced by our extruders during the printing process.  Those of us who aren't already making arrangements for ventilation should possibly consider doing so.

    3D printers shown to emit potentially harmful nanosized particles
  • Sunday, July 21, 2013 - 15:58
    Sean Moss-Pultz: This isn’t my fight… but

    Ever since OK Computer, I’ve loved Radiohead. I so admire how they’ve traveled their own path; and done so with immense commercial success – selling over 30 million albums. I remember staying up late to support their pay-what-you-want release of “In Rainbows”. And then being inspired as hell when I learned they shot “House of Cards” (2008) using not cameras, but lasers. (The visualization was done using Processing. They even open sourced the data on Google Code!)

    This past week Nigel Godrich, their longtime engineer / producer / musician, went after Spotify:

    Streaming is obviously the music distribution model moving forward. I listen to Spotify. I think it’s an amazing product; but I totally agree with Nigel here, that doesn’t make it right for the channel to commodify artists to keep their share prices up.

    Something’s got to change. Our industry (tech) is terrible at this sort of thing (music, apps, newspapers, …). I can’t tell you how many times people have told me, “Content is king.” You know what? It’s total bullshit. It’s ludicrous to pretend that ones and zeros are all created equal. Kill off the ability of the creatives to make a living, and we’ll see how that “content” sounds.

    I’m with Radiohead on this one. We need a rebellion.

  • Saturday, July 13, 2013 - 10:58
    Chris Lord: Getting healthy

    I’ve never really considered myself an unhealthy person. I exercise quite regularly and keep up with a reasonable amount of active hobbies (climbing, squash, tennis). That’s not really lapsed much, except for the time the London Mozilla office wasn’t ready and I worked at home – I think I climbed less during that period. Apparently though, that isn’t enough… After EdgeConf, I noticed in the recording of the session I participated in that I was looking a bit more plump than the mental image I had of myself. I weighed myself, and came to the shocking realisation that I was almost 14 stone (89kg). This put me well into the ‘overweight’ category, and was at least a stone heavier than I thought I was.

    I’d long been considering changing my diet. I found Paul Rouget’s post particularly inspiring, and discussing diet with colleagues at various work-weeks had put ideas in my head. You could say that I was somewhat of a diet sceptic; I’d always thought that exercise was the key to maintaining a particular weight, especially cardiovascular exercise, and that with an active lifestyle you could get away with eating what you like. I’ve discovered that, for the most part, this was just plain wrong.

    Before I go into the details of what I’ve done over the past 5 months, let me present some data:

  • Saturday, July 13, 2013 - 09:34
    NC393 development progress

    Development of the NC393 has finally started – for the last 6 weeks I have been working on it full time. There is still a long way to go before the new camera can replace our current model 353, but at least the very first step is complete – I just finished the PCB layout of the system board.

    10393 System Board PCB layout

    There were not many changes to the specs/features planned and described in the October 2012 post: the camera will be powered by a Xilinx Zynq SoC (XC7Z030-1FBG484C, to be exact) that combines a high-performance FPGA with a dual-core ARM CPU and a generous set of built-in peripherals. It will have 1GB of on-board system memory and 512MB of additional dedicated video/FPGA memory (the NC353 has 64MB of each). Both types of memory use the same 256M×16 DDR3 chips – two for the system (to use the full available memory bus width of 32 bits) and one for the FPGA.

    The main class of camera applications remains multi-sensor. Even more so – the smallest package of the Zynq 7030 device turned out to have enough I/Os to accommodate 4 sensor ports – originally I had planned only 3. These sensor ports are fully compatible with our current 5MPix sensor boards and with the existing 10359 sensor multiplexer boards – with such multiplexers it will be possible to control up to 12 sensors with a single 10393. The four connectors are placed in two pairs on both sides of the PCB, so they overlap in the layout image.

    These 5MPix Aptina sensors have large (by modern standards) pixels with a pitch of 2.2 microns, and that, combined with the good quality of the sensor electronics, will keep them useful for many applications in the future. This backward compatibility will reduce the amount of hardware that needs to be redesigned at once, but of course we are planning to use newer sensors – both existing ones and those that might be released in the next few years. Thanks to the FPGA's flexibility, the same sensor board connectors will be able to run alternative types of signals with programmable voltage levels – this will allow us to keep the same camera core current for years to come.

    The alternative signals are designed to support serial links with the differential signalling common in modern sensors. Each connector can use up to 8 lanes plus a differential clock, plus I²C and an extra pair of control signals. The four connectors use two FPGA I/O banks (two connectors per bank), and each bank has a run-time programmable supply voltage to accommodate a variety of sensor signal levels.

    We plan to hold the 10393 files for about a month before releasing them to production for the prototype batch, while I develop the two companion boards. It is not very likely, but the development of these additional boards may lead to some last-minute changes to the system board.

    One of them – the 10389 – will have functionality similar to the current 10369 board: it will provide mass storage (using an mSATA SSD), inter-camera synchronization (so we will be able to use these camera modules in Eyesis4π cameras) and back panel I/O connectors, including microUSB, an eSATA/USB combo and synchronization in/out. The eSATA/USB combo connector will allow attaching external storage devices powered by the camera. The same eSATA port will be reconfigurable into slave mode, so images/video recorded to the internal mSATA SSD can be transferred to a host computer significantly faster than the main GigE network port allows.

    Another board to develop (the 10385) is the power supply – I decided to remove the primary DC-DC converter from the system board. The camera uses multiple DC-DC converters – the processor alone needs several voltage rails – but internally it runs from a single regulated 3.3V rail: all the other (secondary) converters take 3.3V as their input and provide the other voltages needed. On the 10393 board most secondary voltages are programmable, making it possible to implement “margining” – testing the camera at voltages lower and higher than nominal during production testing, to make sure it can reliably withstand such variations and is not operating on the very edge of failure. The primary power supply's role is to provide that single regulated voltage starting from different sources, such as power over the network, a battery, a wall adapter or something else. It may or may not need to be isolated, and the input power quality may differ.

    One reason to separate the primary power supply from the system board is that currently about half of our cameras are made to be powered over the network, and the other half are modified to use a lower voltage from batteries. Currently we order the 10353 boards without any DC-DC converter and later install one of the two types of converters and make other small changes on the board. Some of our customers do not need any primary DC-DC converter at all – they embed the 10353 boards and provide regulated 3.3V directly to a modified 10353 board. Multi-camera systems can also share primary power supplies. This makes it more convenient to implement the power supply as a plug-in module, so the system board itself can be finished in one run.

    Another reason to remove the primary power from the system board is to remove the IEEE 802.3af (PoE) functionality. Over the last several years we have survived multiple attacks by the “patent trolls” (or NPEs – non-practicing entities, as they like to call themselves), but we have spent thousands of dollars on lawyers to deal with them – some even tried to sell us licenses for already-expired patents. One of the still-active patents relates to “phantom power” – providing power through the signal lines, similar to how it has been done for microphones since 1919. To avoid troll attacks, in the 10353 cameras we were able to use power over the spare pairs (Alternative B), but that is not possible with GigE, which needs all 4 pairs in a cable. We do not believe that using this nearly century-old technology constitutes a genuine invention (maybe tomorrow somebody will “invent” powering SATA devices the same way? Or already did?), but being a small company we do not have the resources to fight in this field and invalidate those patents.

    So the new NC393 made by Elphel will not have PoE functionality: we will not make, manufacture, sell or market it (at least in GigE mode). But the camera will be PoE-ready, so as soon as the patent becomes invalid it will be possible to add the functionality by simply replacing the plug-in module.  And of course our cameras are open and hackable, so our users (in countries where it is legal, of course – similar to the installation of some software programs) will be able to build and add such a module to their cameras without us.

    Both of these companion boards are already partially designed, so I expect that next month we will be able to release the files to production and start building the first prototype system. The two other boards are not needed to test the basic functionality of the system board – the serial debug port (with an embedded USB-to-serial converter) is located on the system board, and the 3.3V will initially be provided by a controlled power supply anyway. When everything is put together, the camera will gain a well-known but still nice feature for autonomous battery-powered timelapse imaging: it will be able to wake itself up (using the alarm signal from the internal clock/calendar it has anyway), boot, capture some images and turn the power off virtually completely – until the next alarm.

  • Monday, July 8, 2013 - 23:53
    Your 3D print in the London Science Museum

    The Science Museum in London is producing an exhibition on 3D printing.

    It is intended to feature, as part of its introduction, a wall of 3D printed items of all shapes, sizes, colours and materials.

    To highlight the open and social aspect of 3D printing the Museum's Rohan Mehra would like to invite members of the RepRap community to donate an object to this introductory display.

    Your name would be added to a panel thanking all contributors.

    If you have created a physical 3D printed object you can freely send in, please e-mail Rohan:


    Rohan Mehra
    Exhibition Content Developer
    Science Museum
    London SW7

  • Thursday, July 4, 2013 - 22:19
    RepRap Morgan by Quentin Harley wins the Gada Prize!

    I'm delighted to announce that Quentin Harley's RepRap Morgan design has won the Uplift Interim Personal Manufacturing Prize, the funding for which was most generously provided by Kartik Gada.

    In Second Place was 'Simpson' by Nicholas Seward and in Third Place was '3DPrintMi' by Chris Lau.

  • Thursday, June 27, 2013 - 17:29
    Holger "zecke" Freyther: Using GNU autotest for running unit tests
    This is part of a series of blog posts about testing inside the OpenBSC/Osmocom project. In this post I am focusing on our usage of GNU autotest.

    GNU autoconf ships with a not-so-well-known piece of software called GNU autotest, which is what we will focus on in this blog post.

    GNU autotest is a very simple framework/test runner. One defines a testsuite, and this testsuite launches test applications and records the exit code, stdout and stderr of each test application. It can diff the output against the expected one and fail if they do not match. Like any of the GNU autotools, a log file is kept of the execution of each test. The tool integrates nicely with automake's make check and make distcheck, which will execute the testsuite and, in case of a test failure, fail the build.

    The way we use it is quite simple as well. We create a simple application inside the tests/testname directory and most of the time just capture the output on stdout. Currently no unit-testing framework is used; instead a simple application is built that mostly uses OSMO_ASSERT to assert the expectations. In case of a failure the application will abort and print a backtrace. This means that on failure the stdout will not be as expected, the exit code will be wrong as well, and the testcase will be marked as FAILED.

    The following will go through the details of enabling autotest in a project.

    Enabling GNU autotest

    The configure.ac file needs a line like AC_CONFIG_TESTDIR(tests). It must be placed after the AC_INIT and AM_INIT_AUTOMAKE directives, and make sure AC_OUTPUT lists tests/atlocal.
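    Put together, a minimal configure.ac fragment might look like this (the package name, version and file list are placeholders, not taken from any real project):

```m4
dnl Placeholder project metadata -- substitute your own.
AC_INIT([mypackage], [0.1])
AM_INIT_AUTOMAKE([foreign])

dnl Tell autoconf that the autotest testsuite lives in tests/.
AC_CONFIG_TESTDIR([tests])

dnl tests/atlocal carries variables such as abs_top_builddir into the testsuite.
AC_OUTPUT([Makefile tests/Makefile tests/atlocal])
```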

    Integrating with automake

    The next thing is to define a testsuite inside tests/Makefile.am. This is some boilerplate code that creates the testsuite and makes sure it is invoked as part of the build process (note that the recipe lines must be indented with tabs):

    # The `:;' works around a Bash 3.2 bug when the output is not writeable.
    $(srcdir)/package.m4: $(top_srcdir)/configure.ac
            :;{ \
            echo '# Signature of the current package.' && \
            echo 'm4_define([AT_PACKAGE_NAME],' && \
            echo '  [$(PACKAGE_NAME)])' && \
            echo 'm4_define([AT_PACKAGE_TARNAME],' && \
            echo '  [$(PACKAGE_TARNAME)])' && \
            echo 'm4_define([AT_PACKAGE_VERSION],' && \
            echo '  [$(PACKAGE_VERSION)])' && \
            echo 'm4_define([AT_PACKAGE_STRING],' && \
            echo '  [$(PACKAGE_STRING)])' && \
            echo 'm4_define([AT_PACKAGE_BUGREPORT],' && \
            echo '  [$(PACKAGE_BUGREPORT)])' && \
            echo 'm4_define([AT_PACKAGE_URL],' && \
            echo '  [$(PACKAGE_URL)])'; \
            } >'$(srcdir)/package.m4'

    EXTRA_DIST = testsuite.at $(srcdir)/package.m4 $(TESTSUITE)
    TESTSUITE = $(srcdir)/testsuite
    DISTCLEANFILES = atconfig

    check-local: atconfig $(TESTSUITE)
            $(SHELL) '$(TESTSUITE)'

    installcheck-local: atconfig $(TESTSUITE)
            $(SHELL) '$(TESTSUITE)' AUTOTEST_PATH='$(bindir)'

    clean-local:
            test ! -f '$(TESTSUITE)' || \
                    $(SHELL) '$(TESTSUITE)' --clean

    AUTOM4TE = $(SHELL) $(top_srcdir)/missing --run autom4te
    AUTOTEST = $(AUTOM4TE) --language=autotest
    $(TESTSUITE): $(srcdir)/testsuite.at $(srcdir)/package.m4
            $(AUTOTEST) -I '$(srcdir)' -o $@.tmp $@.at
            mv $@.tmp $@

    Defining a testsuite

    The next part is to define which tests will be executed. One needs to create a testsuite.at file with content like the one below:
    AT_INIT
    AT_BANNER([Regression tests.])

    AT_SETUP([gsm0408])
    cat $abs_srcdir/gsm0408/gsm0408_test.ok > expout
    AT_CHECK([$abs_top_builddir/tests/gsm0408/gsm0408_test], [], [expout], [ignore])
    AT_CLEANUP
    This initializes the testsuite and creates a banner. The lines between AT_SETUP and AT_CLEANUP represent one testcase: in it we copy the expected output from the source directory into a file called expout, and then inside the AT_CHECK directive we specify what to execute and what to do with the output.

    Executing a testsuite and dealing with failure

    The testsuite will be automatically executed as part of make check and make distcheck. It can also be manually executed by entering the test directory and executing the following.

     $ make testsuite  
    make: `testsuite' is up to date.
    $ ./testsuite
    ## ---------------------------------- ##
    ## openbsc test suite. ##
    ## ---------------------------------- ##
    Regression tests.
    1: gsm0408 ok
    2: db ok
    3: channel ok
    4: mgcp ok
    5: gprs ok
    6: bsc-nat ok
    7: bsc-nat-trie ok
    8: si ok
    9: abis ok
    ## ------------- ##
    ## Test results. ##
    ## ------------- ##
    All 9 tests were successful.
    In case of a failure the following information will be printed and can be inspected to understand why things went wrong.
    2: db FAILED (testsuite.at:13)
    ## ------------- ##
    ## Test results. ##
    ## ------------- ##
    ERROR: All 9 tests were run,
    1 failed unexpectedly.
    ## -------------------------- ##
    ## testsuite.log was created. ##
    ## -------------------------- ##
    Please send `tests/testsuite.log' and all information you think might help:
    Subject: [openbsc] testsuite: 2 failed
    You may investigate any problem if you feel able to do so, in which
    case the test suite provides a good starting point. Its output may
    be found below `tests/testsuite.dir'.
    You can go to tests/testsuite.dir and have a look at the failing tests. For each failing test there will be one directory that contains a log file about the run and the output of the application. We are using GNU autotest in libosmocore, libosmo-abis, libosmo-sccp, OpenBSC, osmo-bts and cellmgr_ng.
  • Tuesday, June 11, 2013 - 01:10
    Why is 3D printing such a powerful way to make solid objects?

    Journalists often ask me what is special about 3D printing.  So I answer them.  And then they don't print what I say.  The reason is that they are frightened by mathematics, or that they think that their readers are.

    But you RepRap Blog readers eat mathematical arguments for breakfast.  So here, for the record, is the answer:

    Why is 3D printing such a powerful way to make solid objects?
    To answer this question we first have to ask another: “In what ways can the shape of a solid object be complicated?”
    Often things just get more complicated the more of them there are. If you are a bank with a million customers who have account numbers and your computer has to sort them into order, then that is a more complicated problem than it would be if you only had a thousand customers.
    But shapes aren't just complicated like that. They can be complicated in two other ways as well.

    This picture (thanks to my old friend John Woodwark for the idea for this) shows all three ways that shapes can be complicated:
    1. First there is “combinatorial” complexity – that is the lots-of-bank-account-numbers complexity: a gear wheel has more bits to it than a triangle does;
    2. Second there is “analytical” complexity – triangles are made of straight lines, which have simple equations. But complicated curved shapes have correspondingly complicated equations;
    3. And third there is “dimensional” complexity – a triangle is two-dimensional, but a pyramid is three-dimensional.
    These three complexities are independent. You can have any mix of them.
    But if you want a computer to control a machine to make shapes automatically, then dimensional complexity is the most difficult complexity to deal with. This is because, as the number of dimensions increases, the very nature of shape changes. Here are just a couple of examples:
    1. If you have some points round a 2D circle you can visit them in order one after another; but if you have some points on a 3D sphere, there is no order that places them one after another; and
    2. If you have a piece of string, you can tangle it in 3D (as any kitten will be able to demonstrate); but a piece of string in 2D is simpler, and so cannot be tangled.
    Bearing this in mind, let's look at the difficulty of getting a computer to control machines to make something automatically in two ways: by cutting the thing from a solid block, and by 3D printing the thing layer by layer.

    Here is a turbocharger from a car engine. And the object top left with the yellow tip is a cutting tool that is removing material from a solid block to reveal the turbocharger like a sculptor chiselling a block of marble. To make the turbocharger the computer has to figure out how to move the yellow cutter.
    Rather surprisingly (as we live in a 3D world) the cutter can move in five dimensions. These are the normal three: left-right, front-back, and up-down, plus rotations about those directions. The rotations are needed because the cutter must twist to cut the shape. That totals six dimensions, but rotation about the axis of the cutter itself doesn't count, which leaves five dimensions.
    The computer controlling the cutter needs to work out how to move it around in that five-dimensional space. And not only that, it has to make sure that no part of the cutter (like the conical bit at the top where it attaches to the cutting machine) collides with any un-cut part of the raw block, or with the turbocharger being made.
    This is a very, very difficult mathematical and computational problem, and we still (2013) can only solve it for some shapes, even though we know the computer should theoretically be able to cut out others that we can't (at the moment) solve.

    Now let's look at the turbocharger being made on a 3D printer. This will start at the bottom and build the first layer of the turbocharger. Then it will move up a fraction and build the next layer. And so on.
    The right hand picture shows a layer about half way up, and this is all that the computer has to deal with at each stage – a 2D problem, not a 5D one.
    It is very easy to program a computer to deal with such 2D shapes, and – for this reason – 3D printing machines can make any shape that the physics of the machine can handle, no matter how complicated that shape is. And, unlike with cutting, there is no problem of collisions. The computer always knows that it can move the 3D printer freely above the layer being printed, because there is nothing there yet.
    (In reality, a tiny amount of 3D has to be dealt with: a 3D printer has to put disposable support material under any overhangs, because it can't build layers on thin air. But this is a very easy calculation to do. The computer works out the 2D slices starting at the top of the object and goes downwards. At any level the support material needed is the shape of everything in the layer above minus the shape of everything in this layer. When the computer has done all this, it then reverses the order and builds from the bottom up.)
    This simplicity of the computing and mathematics of 3D printing is the reason that it is humanity's most powerful manufacturing technology: the computer controlling a 3D printer will always have a much easier problem to solve than a computer cutting out the object being made, no matter how complicated the object is. And because of that, 3D printing is by far the most versatile way we have to make things.
  • Saturday, June 8, 2013 - 17:30
    Marcin "hrw" Juszkiewicz: ARMology

    When I was last in Cambridge we had a discussion about ARM processors. Paweł used the term “ARMology” then. And with the recent announcement of the Cortex-A12 CPU core, I thought it might be a good idea to write a blog post about it.

    Please note that my knowledge of ARM processors starts in 2003, so I may make mistakes about anything older. I have tried to understand articles about the old days, but they do not always agree on a single version of the story.

    Ancient times

    The ARM1 was released in 1985 as a CPU add-on for the BBC Micro, manufactured by Acorn Computers Ltd. as the result of a few years of research work. They wanted a new processor to replace the ageing 6502 used in the BBC Micro and Acorn Electron, and none of the existing ones fit their requirements. Note that it was not a market product but rather a development tool made available to selected users.

    But it was the ARM2 which landed in new computers — the Acorn Archimedes (1987). It added multiply instructions, so a new version of the instruction set was created: ARMv2. Just an 8MHz clock, but remember that it was the first computer with the new CPU…

    Then the ARM3 came — with an integrated cache controller and a 25MHz clock. The ISA was bumped to ARMv2a due to the added SWP instruction. It was released in another Acorn computer: the A5000. It was also used in the Acorn A4, which was the first ARM-powered laptop (though the term “ARM Powered” was created a few years later). I hope that one day I will be able to play with all those old machines…

    There was also the ARM250 processor, with the same ARMv2a instruction set as the ARM3 but no cache controller. It is worth mentioning because it can be seen as the first SoC, with the ARM, MEMC, VIDC and IOC chips integrated into one piece of silicon. This made it possible to create budget versions of the computers.

    ARM Ltd.

    In 1990 Acorn, Apple and VLSI co-founded Advanced RISC Machines Ltd., which took over research and development of ARM processors. Their business model was simple: “we work on CPU cores, and other companies pay us license fees to make chips”.

    Their first CPU was the ARM60, with a new instruction set: ARMv3. It had a 32-bit address space (compared to 26-bit in older versions), was endian-agnostic (both big and little endian were possible), and there were other improvements.

    Please note the lack of ARM4 and ARM5 processors. I have heard some rumours about that, but will not repeat them here, as some of them just do not fit when compared against the facts.

    The ARM610 powered the Apple Newton PDA and the first Acorn RiscPC machines, where it was later replaced by the ARM710 (still the ARMv3 instruction set, but ~30% faster).

    First licensees

    You can create new processor cores, but someone has to buy and manufacture them… In 1992 GEC Plessey and Sharp licensed ARM technology; the next year Cirrus Logic and Texas Instruments were added; then AKM (Asahi Kasei Microsystems) and Samsung joined in 1994, followed by others…

    From that list I recognize only Cirrus Logic (I used their crazy EP93xx family), TI and Samsung as vendors of processors ;D


    ARM7

    One of the next CPU cores was the ARM7TDMI (Thumb + Debug + Multiplier + ICE), which added a new instruction set: Thumb.

    The Thumb instructions were not only meant to improve code density, but also to bring the power of the ARM into cheaper devices which might only have a 16-bit datapath on the circuit board (32-bit paths are costlier). When in Thumb mode, the processor executes Thumb instructions. While most of these instructions map directly onto normal ARM instructions, the space saving comes from reducing the number of options and possibilities available — for example, conditional execution is lost, and only branches can be conditional. Fewer registers can be directly accessed in many instructions, etc. However, given all of this, good Thumb code can perform extremely well in a 16-bit world (as each instruction is a 16-bit entity and can be loaded directly).

    The ARM7TDMI landed nearly everywhere – MP3 players, cell phones, microwaves and any place where a microcontroller could be used. I heard that a few years ago half of ARM Ltd.'s income was from license fees for this CPU core…


    But ARM7 did not end with the ARM7TDMI… There was the ARM7EJ-S core, which used the ARMv5TE instruction set, and also the ARM720T and ARM740T with ARMv4T. You can run Linux on the Cirrus Logic CLPS711x/EP721x/EP731x ones ;)

    According to ARM Ltd.'s page about the ARM7, the ARM7 family is the world's most widely used 32-bit embedded processor family, with more than 170 silicon licensees and over 10 billion units shipped since its introduction in 1994.


    ARM8

    I heard that the ARM8 is one of those things you should not ask ARM Ltd. people about. Nothing strange, when you look at the history…

    The ARM810 processor made use of the ARMv4 instruction set and had a 72MHz clock. At the same time DEC released the StrongARM with a 200MHz clock… 1996 was definitely the year of the StrongARM.

    In 2004 I bought my first Linux/ARM powered device: Sharp Zaurus SL-5500.


    ARM9

    Ah, the ARM9… this was a huge family of processor cores…

    With it, ARM moved from a von Neumann (Princeton) architecture to a Harvard architecture with separate instruction and data buses (and caches), significantly increasing its potential speed.

    There were two different instruction sets used in this family: ARMv4T and ARMv5TE. Some kind of Java support was also added in the latter, but who knows how to use it — ARM keeps the details of Jazelle behind doors which can be opened only with a huge amount of money.


    Here we have the ARM9TDMI, ARM920T, ARM922T, ARM925T and ARM940T cores. I mostly saw the 920T one, in far too many chips.

    My collection includes:

    • ep93xx from Cirrus Logic (with their sick VFP unit)
    • omap1510 from Texas Instruments
    • s3c2410 from Samsung (note that some s3c2xxx processors are ARMv5T)


    ARM9E

    Note: by ARMv5T I mean every CPU, regardless of which extensions it has built in (Enhanced DSP, Jazelle etc.).

    I consider this to be the most popular family (probably after the ARM7TDMI). Countless companies had their own processors based on these cores (mostly on the ARM926EJ-S one). You can get them even in QFP packages, so hand soldering is possible. CPU frequency goes over 1GHz with the Kirkwood cores from Marvell.

    In my collection I have:

    • at91sam9263 from Atmel
    • pxa255 from Intel
    • st88n15 from ST Microelectronics

    I also had an at91sam9m10, a Kirkwood-based Sheevaplug and an ixp425-based NSLU2, but they found a new home.


    ARM10

    Another quiet moment in ARM history. The ARM1020E, ARM1022E and ARM1026EJ-S cores existed but did not look popular.

    UPDATE: Conexant uses an ARM10 core in their next-generation DSL CPE systems such as bridge/routers, wireless DSL routers and DSL VoIP IADs.


    Released in 2002 as four new cores: ARM1136J, ARM1156T2, ARM1176JZ and ARM11 MPCore. Several improvements over the ARM9 family, including an optional VFP unit. New instruction set: ARMv6 (and the ARMv6K extensions). There was also Thumb-2 support in the ARM1156 core (though I do not know whether anyone made chips with it). The ARM1176 core got TrustZone support.

    I have:

    • omap2430 from Texas Instruments
    • i.mx35 from Freescale

    Currently the most popular chip in this family is the BCM2835, whose GPU got an ARM1176 CPU core on die because there was some space left and none of the Cortex-A processor cores fit there.


    A new family of processor cores was announced in 2004, with the Cortex-M3 as the first CPU. There are three branches:

    • Application
    • Realtime
    • Microcontroller

    All of them (with the exception of the Cortex-M0, which is ARMv6) use new instruction sets: ARMv7 and Thumb-2 (some of the R/M lines are Thumb-2 only). Several CPU modules were announced (some with newer cores):

    • NEON for SIMD operations
    • VFP3 and VFP4
    • Jazelle RCT (aka ThumbEE)
    • LPAE for more than 4 GB RAM support (Cortex-A7/12/15)
    • virtualization support (A7/12/15)
    • big.LITTLE
    • TrustZone

    I will not cover the R/M lines, as I have not played with them.


    The Cortex-A8, announced in 2006, was a single-core ARMv7-A design. It was released in chips by Texas Instruments, Samsung, Allwinner, Apple, Freescale, Rockchip and probably a few others.

    It has higher clocks than the ARM11 cores and achieves roughly twice the instructions executed per clock cycle thanks to its dual-issue superscalar design.

    So far collected:

    • am3358 from Texas Instruments
    • i.mx515 from Freescale
    • omap3530 from Texas Instruments


    The Cortex-A9 was the first multi-core design in the Cortex family, allowing up to 4 cores in one processor. Announced in 2007. It looks like most of the companies which licensed previous cores also licensed this one, but there were new vendors as well.

    There are also single-core Cortex-A9 processors on the market.

    I have products based on the omap4430 from Texas Instruments and the Tegra 3 from NVIDIA.


    The Cortex-A5 was announced around the end of 2009 (I remember discussing something new from ARM with someone at ELC/E). Up to 4 cores, aimed mostly at designs where ARM9 and ARM11 cores were used. In other words, a new low-end CPU with a modern instruction set.


    The Cortex-A15 is the fastest (so far) core in the ARMv7-A part of the Cortex family. Up to 4 cores. Announced in 2010, it expanded the ARM line with several new things:

    • 40-bit LPAE, which extends the address range to 1 TB (but still 32-bit per process)
    • VFPv4
    • Hardware virtualization support
    • TrustZone security extensions

    I have a Chromebook with the Exynos 5250 CPU and have to admit that it is the best device for ARM software development. Fast, portable and hackable.


    The Cortex-A7, announced in 2011, is the younger brother of the Cortex-A15 design. Slower, but it consumes much less power.


    The Cortex-A12 was announced in 2013 as a modern replacement for Cortex-A9 designs. It has everything from the Cortex-A15/A7 and is ~40% faster than the Cortex-A9 at the same clock frequency. No chips are on the market yet.


    big.LITTLE is an interesting part, announced in 2011. It is not a new core but a combination of cores: a vendor can mix Cortex-A7/12/15 cores to build a kind of dual-cluster processor which runs different cores for different needs. For example, normal operation runs on the A7 cluster to save energy, switching up to the A15 when more processing power is needed. The number of cores in each cluster does not even have to match.

    It is also possible to use all the cores together, which may result in an 8-core ARM processor scheduling tasks across different CPU cores.

    There are a few implementations already: the ARM TC2 testing platform, HiSilicon K3V3, Samsung Exynos 5 Octa and Renesas Mobile MP6530 were announced. They differ in the number of cores, but all (except the TC2) use the same number of A7 and A15 cores.


    In 2011 ARM announced a new 64-bit architecture called AArch64. There will be two cores, the Cortex-A53 and the Cortex-A57, and big.LITTLE combinations will be possible as well.

    A lot of things changed here. VFP and NEON are part of the standard. A lot of work went into making sure that designs will not be as fragmented as the 32-bit architecture is.

    I worked on AArch64 bootstrapping in the OpenEmbedded build system and also ported several applications.

    I hope to see hardware in 2014, with the possibility to play with it and check how it compares to current systems.

    Other designs

    ARM Ltd. is not the only company which releases new CPU cores. That is because there are a few types of license you can buy. Most vendors just buy a license for an existing core and use it in their designs. But some companies (Intel, Marvell, Qualcomm, Microsoft, Apple, Faraday and others) paid for an ‘architectural license’, which allows them to design their own cores.


    Probably the oldest one was the StrongARM made by DEC, later sold to Intel, where it was used as the base for the XScale family with the ARMv5TE instruction set. Later, iWMMXt was added in the PXA27x line.

    In 2006 Intel sold the whole ARM line to Marvell, which released newer processor lines and later moved to its own designs.

    There were a few lines in this family:

    • Application Processors (with the prefix PXA)
    • I/O Processors (with the prefix IOP)
    • Network Processors (with the prefix IXP)
    • Control Plane Processors (with the prefix IXC)
    • Consumer Electronics Processors (with the prefix CE)

    One day I will dust off my Sharp Zaurus c760 just to check how recent kernels work on the PXA255 ;D


    Their Feroceon and PJ1 cores were independent ARMv5TE implementations: Feroceon was Marvell’s own ARM9-compatible CPU used in Kirkwood and others, while PJ1 was based on it and replaced XScale in later PXA chips. PJ4 is the ARMv7-compatible version used in all modern Marvell designs, on both the embedded and the PXA side.


    Qualcomm, known mostly for wireless networks (GSM/CDMA/3G), released its first ARM-based processors in 2007. The first ones were based on the ARM11 core (ARMv6 instruction set), and the next year ARMv7-A parts became available as well. Their high-end designs (Scorpion and Krait) are similar to the Cortex family but have different performance. The company also uses Cortex-A5 and A7 cores in low-end products.

    The Nexus 4 uses the Snapdragon S4 Pro, and I also have an S4 Plus based Snapdragon development board.


    Faraday Technology Corporation released its own processors, which used the ARMv4 instruction set (ARMv5TE in newer cores): the FA510, FA526 and FA626 for v4, and the FA606TE, FA626TE, FMP626TE and FA726TE for v5TE. Note that the FMP626TE is dual core!

    They also have a license for the Cortex-A5 and A9 cores.

    Project Denver

    Quoting Wikipedia article about Project Denver:

    Project Denver is an ARM architecture CPU being designed by Nvidia, targeted at personal computers, servers, and supercomputers. The CPU package will include an Nvidia GPU on-chip.

    The existence of Project Denver was revealed at the 2011 Consumer Electronics Show. In a March 4, 2011 Q&A article CEO Jen-Hsun Huang revealed that Project Denver is a five year 64-bit ARM architecture CPU development on which hundreds of engineers had already worked for three and half years and which also has 32-bit ARM architecture backward compatibility.

    The Project Denver CPU may internally translate the ARM instructions to an internal instruction set, using firmware in the CPU.


    AppliedMicro announced that they will release AArch64 processors based on their own cores.

    Final note

    If you spot any mistakes, please write a comment and I will do my best to fix them. If you have something interesting to add, please leave a comment as well.

    I used several sources to collect data for this post. Wikipedia articles helped me with details about Acorn products and ARM listings. The ARM Infocenter provided other information. Dates were taken from Wikipedia or the ARM Company Milestones page. The ancient-times part is based on The ARM Family and The history of the ARM CPU articles. The history of the ARM architecture was interesting and helpful as well.

    Please do not copy this article without providing author information. It took me quite a long time to finish.


    8 June evening

    Thanks to notes from Arnd Bergmann I made some changes:

    • added ARM7, Marvell, Faraday, Project Denver and X-Gene sections
    • fixed Cortex-A5 to be up to 4 cores instead of single-core
    • mentioned Conexant in the ARM10 section
    • improved the Qualcomm section to mention which cores are original ARM ones and which are modified

    David Alan Gilbert mentioned that the ARM1 was not freely available on the market. Added a note about it.

    All rights reserved © Marcin Juszkiewicz
    ARMology was originally posted on Marcin Juszkiewicz website

  • Thursday, June 6, 2013 - 12:13
    SlyBlog: OpenPhoenux LinuxTag 2013 Impressions

    The LinuxTag 2013 is over, and I want to share some brief impressions I got during our stay in Berlin.

    LinuxTag is a nice and well organized FOSS exhibition in Germany, attracting more than 10,000 visitors over 4 days.

    We gave a talk about the OpenPhoenux project on the second evening and had about 60 listeners. Some of them got very interested and followed us to the booth afterwards. For everyone who couldn’t participate, the slides are available online: Slides.pdf

    We shared a booth with some other “Linux & Embedded” projects, namely OpenEmbedded, Ethernut, Nut/OS and Oswald/Metawatch. Our booth looked professional, and I think we got quite a few people interested in the project. We basically had a constant flow of people at the booth during our 3-day stay, and the overall feedback was rather positive!

    OpenPhoenux LinuxTag 2013 (1) OpenPhoenux LinuxTag 2013 (2) OpenPhoenux LinuxTag 2013 (3)

    We got interviewed by the “GNU funzt!” team as well. The (German) video is now available on YouTube (the OpenPhoenux interview starts at 5:00):

    All in all it was a very nice stay in Berlin. I especially enjoyed meeting and chatting with guys who already owned a GTA04. It looks like the community is growing again!


  • Thursday, June 6, 2013 - 00:54
    Elphel new camera calibration facility

    Fig.1. Elphel new calibration pattern

    Elphel moved to a new calibration facility in May 2013. The new office is designed with the calibration room as its most important space, expandable when needed to the size of the whole office by means of a wide garage door. The back wall of the new calibration room is covered with a large 7m x 3m pattern, illuminated with bright fluorescent lights. The length of the room allows positioning the calibration machine 7.5 meters away from the pattern. The long space and large pattern will allow calibrating Eyesis4π far enough from the pattern to be within the depth of field of its lenses focused for infinity, while still keeping a wide angular size, preferred for accuracy of measurements.

    We had already hit the precision limits using the previous, smaller 2.7m x 3.0m pattern. While the software was designed to accommodate a pattern where each node has an individually corrected position (relative to a flat uniform grid), the process assumed that the 3D coordinates of the nodes do not change between measurements.

    The main problem with the old pattern was that the material it was printed on was attached to the wall only along the top edge, so it still had the freedom to move slightly perpendicular to the wall. We noticed this while combining measurements made at different times, as most of our cameras need to be calibrated at several “stations” – positions relative to the target (rotation around 2 axes is performed automatically). We ran calibration at night to reduce variations caused by vibrations in the building, so successive station measurements were performed on different dates. Modified software was able to deal with variations in the Z (perpendicular to the surface) direction between station measurements (that actually helped in the overall adjustment of variables), but the shape of the target pattern could change if the temperature in the building changed during measurements. The PVC material has high thermal expansion, and a small expansion in the X,Y directions can cause much larger perpendicular variations when the target is attached at multiple points to a wall with a lower thermal coefficient.
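    As a rough back-of-the-envelope illustration of that last point, consider a strip of material pinned at two points that buckles into a shallow bump when it expands. The expansion coefficient and temperature change below are assumed ballpark values, not measured ones:

```python
import math

# Assumed ballpark values for illustration only (not Elphel's measurements)
alpha_pvc = 6e-5   # linear thermal expansion of PVC, 1/K (typical order of magnitude)
span = 1.0         # distance between attachment points, m
dT = 5.0           # temperature change, K

dL = alpha_pvc * span * dT  # in-plane growth of the strip between the pins
# crude model: the expanded strip forms two straight segments over the fixed chord
bulge = 0.5 * math.sqrt((span + dL) ** 2 - span ** 2)
print(f"in-plane growth: {dL * 1000:.2f} mm, out-of-plane bulge: {bulge * 1000:.1f} mm")
```

    A 0.3 mm in-plane expansion becomes a bulge on the order of a centimeter, which is why gluing the new pattern flat to the drywall matters so much.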

    Fig. 2. Floor plan

    Calibration Room

    The new space is designed to accommodate various camera calibration procedures.

    • First of all, we made the pattern as large as possible – it is 7.01m x 3.07m – we even raised the ceiling near the target.
    • The target itself is now printed on film attached to the wall like wallpaper, so there is no movement relative to the wall, and thermal expansion is defined by the lower coefficient of the drywall. We also provided air channels inside the wall to make it possible to implement thermal stabilization of the wall.
    • The calibration room allows moving the camera under test up to 7.5m away from the pattern. The room is separated from the rest of the facility by the wide “garage” door, so changing lighting conditions outside the room do not influence calibration.
    • The other rooms are designed in such a way that the camera can be moved up to 24 meters from the target (with the garage door open) and have an unobstructed view of virtually the full pattern – that may be needed for long focal length lenses.

    Fig. 3. Pattern wall during construction

    Preparing the wall for the target pattern

    During construction of the new facility we watched the progress carefully, as our temporary space was located just one floor up, and we were mostly concerned about the quality of the target wall. Yes, software can accommodate the non-flatness of the wall, but it is better to start with good “hardware” – to achieve subpixel precision the software averages correlation over rather large areas of the image (currently 64×64 pixels), so sharp variations will produce different measurements from different distances or viewing angles. When we first measured the wall flatness, we noticed large steps between the gypsum board panels, so the construction people promised to make it a level 5 finish and flatten the surface. They put “mud” all over the wall and sanded it, which removed all of the sharp discontinuities on the target surface, but still left some smooth ones of up to ±3mm, as we measured later with the camera.
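    For a feel of how subpixel precision falls out of such area correlation, here is a generic sketch (not Elphel's actual code) of the common trick of fitting a parabola through the correlation maximum and its neighbours; the vertex of the parabola gives the fractional peak position:

```python
import numpy as np

def subpixel_peak_1d(corr):
    """Estimate the correlation peak position with subpixel precision
    by fitting a parabola through the maximum and its two neighbours."""
    i = int(np.argmax(corr))
    if i == 0 or i == len(corr) - 1:
        return float(i)  # peak on the border: no neighbours to fit
    y0, y1, y2 = corr[i - 1], corr[i], corr[i + 1]
    denom = y0 - 2.0 * y1 + y2
    if denom == 0.0:
        return float(i)  # flat top: fall back to the integer position
    return i + 0.5 * (y0 - y2) / denom

# smooth synthetic peak centred between samples 3 and 4
corr = np.exp(-0.5 * ((np.arange(8) - 3.4) / 1.2) ** 2)
print(subpixel_peak_1d(corr))  # close to 3.4
```

    In 2D the same fit is applied along both axes of the correlation window; a sharp step in the wall surface would distort the peak shape differently at each distance and viewing angle, which is exactly why the smooth finish matters.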

    Once the wall was flat, it had to be prepared for application of the self-adhesive vinyl film, so that the wall finish would not make it bubble later. Ideally we wanted it to be able to withstand peeling the film off if we ever have to do that. When we searched the Internet about applying vinyl film to a painted wall, we found that most fresh paint needs some 60(!) days to cure before the film can be applied. So we decided to go with a two-component epoxy paint that requires only one week before the film can be applied. When we inspected the epoxy-painted wall (the paint was applied with regular rollers), it did not look flat. Well, it was just a roller-painted wall, so it had those small bumps, and we were concerned that the vinyl film would conform to them – and if it did, the position “noise” would be higher than what the cameras can resolve. So we got more epoxy paint and started a long process of wet-sanding and applying new paint coats. We have compressed air (used for blowing during optical and mechanical assembly), so we thought we would just spray the paint instead of rolling it, to avoid the bumps that were left even after professional work. Unfortunately, without the needed experience in spray-painting, we set the pressure too high, and probably as much as half of our first coat ended up somewhere other than the sprayed wall – the paint droplets were too small. The next coat was better, and in several days we had a wall that seemed to be covered with hard plastic laminate, not just painted.

    Installing the pattern

    Our next concern was how to install the vinyl film. We wanted a very good match between the individual panels, as it is not possible to have the target printed on a single piece, the maximal width of which is just over 1.5m. We hesitated to order professional installation because for regular applications (like vehicle wraps) such sub-millimeter precision is not required. For a truly seamless result (compared to the precision of the calibration) we would need a better than 0.1mm match, but it is possible to just mask out the grid nodes around the seams and disregard them during calibration data processing, so we planned for about a 0.5mm match.

    Fig.4. Pattern Z-deviations (perpendicular to the target plane)

    Fig. 5. Pattern deviations in X,Y plane

    Fig. 6. Pattern deviation from the "ideal" grid (horizontal profile)

    We knew people do this, but it still seemed very difficult to apply 1.5m wide by 3m long “stickers” without wrinkles and bubbles. A web search provided multiple recommendations, but the main thing was to use the “wet” method, which none of us knew before. It involves spraying the wall (and the adhesive side of the film) with “application fluid” (basically water with a small addition of soap and alcohol). When the sticky film is applied to the wet surface, the adhesive is temporarily inhibited and it is possible to reposition (slide) the film to achieve the required match. Then the water is squeezed out with squeegee tools, and if done properly, there should be no bubbles left.

    Geometric properties of the pattern

    The Z-deviations in Fig. 4 show the wall non-flatness; the gypsum panel borders are still visible (even with the “level 5” finish), and the horizontal discontinuity near the top is where the wall was extended to accommodate the increased ceiling height. The positive Z direction is away from the camera, so lighter areas are concave areas on the wall and darker ones are bumps extending out from the wall.

    Fig. 5 illustrates the mismatch and stretching of the applied vinyl panels. The red/green color difference corresponds to the horizontal shift, while blue/green corresponds to the vertical one.

    Figure 6 contains a horizontal profile at half-height and provides numerical values of the deviations. The Diff. Error plot indicates areas around panel boundaries that should be avoided during reprojection error minimization and when measuring point spread functions (PSF) for aberration correction.

    Illuminating the target pattern

    We use the same pattern for different parts of the camera calibration. Correction of aberrations and distortions does not impose strict requirements on the illumination of the pattern, but we use the same images to measure (and compensate) lens vignetting and color variations of the camera sensitivity, caused among other reasons by the multilayer infrared cutoff filter and angular variations of pixel color sensitivity. This method works for the low-frequency part of the flat-field correction and does not deal with pixel fixed-pattern noise, which, if present, should be corrected by other means.

    Fig. 7. pattern brightness for station 2 view 0 (top channels)

    Fig. 8. pattern brightness for station 2 view 0 (top), specular component

    Fig. 9. pattern brightness for station 2 view 1 (bottom channels)

    Fig. 10. pattern brightness for station 2 view 1 (bottom channels), specular

    By acquiring thousands of images made by different channels of the camera capturing the same target, it is possible to perform simultaneous relative photometric calibration of the pattern and the sensors, provided that each element of the pattern preserves the same brightness in each image where it is captured. This may be true when the target is observed from the same point, but when we calibrate the Eyesis4π camera, with 2 sensors attached far from the other ones that travel significantly while capturing the target, this assumption does not hold. The same pattern element has different brightness depending on the lens position when the image is acquired. This is because even a matte pattern material is not perfectly diffusive; there is some specular (reflective) component.

    In the earlier setup we used photographic lamps with large umbrellas, but these umbrellas were still small when placed at a distance that kept them out of the camera view, and the specular component was still visible after the diffusive part was subtracted. When designing the new calibration target we decided to use bright linear fluorescent lamps along the floor and the ceiling and keep them spatially compact, without any diffusers or umbrellas; we only used mirrors behind the lamps to effectively double their output. Such a light source was expected to produce specular reflections on the target, but those reflections occupy just a small portion of the target surface, and the rest of it is close to purely diffusive. That allowed us to locate the positions of the specular reflections for each camera station/viewpoint by subtracting the average (over all stations/viewpoints) pattern brightness from each individual station/view of the pattern, and then masking out these areas of the pattern during flat-field calculations.
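    The masking step described above might be sketched roughly like this (the array shapes and threshold are hypothetical; the actual processing lives in Elphel's calibration software):

```python
import numpy as np

def specular_mask(views, threshold=0.05):
    """Flag likely-specular areas in per-view pattern brightness maps.

    views: array of shape (n_views, height, width). The average over all
    stations/views approximates the diffuse brightness of the pattern;
    pixels where one view is noticeably brighter than that average are
    suspected specular reflections and get masked out of flat-field work.
    """
    diffuse = views.mean(axis=0)          # average over all stations/views
    excess = views - diffuse              # per-view deviation from diffuse estimate
    return excess > threshold * diffuse   # True where a highlight is suspected

# toy data: 3 "views" of a uniform target, one with a bright specular spot
views = np.ones((3, 4, 4))
views[1, 1, 2] = 1.5
mask = specular_mask(views)
print(int(mask.sum()))  # only the one highlighted pixel is flagged
```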

    The images in Fig. 7-10 were made for camera station 2 – 3.3m from the target and 1.55m to the right of the target center, which caused the lamp reflections to shift to the left. View 0 (Fig. 7-8) corresponds to the camera head, which is the center of rotation. View 1 (Fig. 9-10) is captured by the camera’s 2 bottom sensors, mounted 820 mm below the camera head, so they moved significantly between images – that caused the visible curvature in the reflection of the top lamps.

    Virtual tour of Elphel calibration facility

    You may walk through our calibration facility using our WebGL viewer/editor. The images were captured with a newly calibrated Eyesis4π camera; there is no 3D parallax correction – these are just raw panoramas stitched for infinity, and most close objects are out of the depth of field of the lenses. I hope you’ll still enjoy this snapshot of the new facility where we plan to develop and precisely calibrate many new cameras.

  • Friday, May 31, 2013 - 10:16

    THSF 2013, it's over

    Thanks (again) to everyone!!! See you next year!

    Share your photos and videos by adding a link on this pad: pad.tetalab.org/p/thsf-2013-medias

    Come have a drink on Thursday, June 13 for the release of issue 26 of Multiprise magazine, which includes an article about the THSF.


    >>> Upcoming Tetalab meetings: every Wednesday evening and the rest of the week!

    Who? Open to everyone on Wednesdays (bring beer or other goodies to share) and to members the rest of the week. Registration is open, RTFM Tetalab: rtfm.tetalab.org.

    What? On the program: 3D printing (introductions, discussions), Arduino & servos, oscilloscopes, Processing, Firefox OS, LeapMotion, Eagle, Prism, PureData, Raspberry Pi, Le Bit et le Couteau, retrogaming, OHM2013, La Novela, net neutrality, Hadopi, Wikileaks, ...

    Why? To learn, share, meet people, get to know Tetaneutral.net, make progress on projects, ...

    When? Every Wednesday evening from 9pm, and Saturday, June 15, 2013 from 10am.

    How? Bring your billions of neurons, a laptop, croissants, a smile.

    Where? At the container at Mix'Art Myrys, 12 rue Ferdinand Lassalle, 31200 Toulouse

  • Saturday, May 25, 2013 - 01:24
    Like a glue gun on a robot arm...

    We've always described the RepRap in simplistic terms as being like a glue gun on a robot arm. Well, someone has taken this rather literally. It's actually quite invigorating to see people attempting to make more challenging structures than stacks of 2D planar laminations, and we've seen people using repraps to do this before, but the Mataerial people seem to have it worked out pretty well.

  • Monday, May 13, 2013 - 01:54
    3D Printing Where It Needs To Be

    Here is a remarkable achievement. Follow this link for details.

    To quote Vik Olliver: "3D Printing Where It Needs To Be."

    If the thousands of people involved in RepRap each contributed a few dollars, or twenty minutes of their expertise, think what this would become...

  • Wednesday, April 24, 2013 - 16:42
    BotQueue v0.3 - Now with Webcams, Pausing, and More!

    Coming quickly on the heels of the last release, the latest v0.3 release of BotQueue adds some really exciting new features that make it much nicer to use.  The coolest new feature is webcam support.  The client can now upload pictures of your machine while it is printing and show it on the BotQueue.com website.  This means you can watch and control your bot from anywhere you have an internet connection using any device you want - computer, laptop, smartphone, or tablet.

    Read more about it on the release post or head over to BotQueue.com to try it out.  Works great with RepRap printers.  100% open source guaranteed.
  • Tuesday, April 23, 2013 - 20:34
    Hardware Freedom Day 2013 report

    On Saturday, April 20, 2013, we welcomed around fifty people for the first HFDAY. One more spark in France on the subject of Open-Source Hardware (Matériel Libre).



    A big thank-you to everyone who came or supported the day by spreading the word: tTh, Philippe, Léon (the cyborg), EricDuino (thanks for the photos below, by the way), Seb for the white RepRap, and Mix'Art-Myrys for the space and moral support.

    RepRap-France.com stopped by for a visit. Thanks for the discussions on business models and the future of 3D printing. Franck, on the right, founded with Guilhem (not present) one of the first French companies based on "open hardware" REPRAP printers.


    See you next year for a second, much better prepared edition! =)

    -David Venancio / metabaron

  • Wednesday, April 17, 2013 - 16:49
    First International Open-Source Hardware / Matériel Libre Day


    At the initiative of the Digital Freedom Foundation, 66 hackerspaces are holding an open-house day on Saturday, April 20, 2013 devoted to Matériel Libre, or Open-Source Hardware, an innovative concept that gives everyone the opportunity to freely build their own tools.

    Tetalab, supported by Mix'Art-Myrys, will present several RepRap 3D printers from 10am to 6pm, along with video projections and documents on the subject. The curious of all kinds, Open-Source enthusiasts, hi-tech and low-tech tinkerers: everyone is welcome.


  • Thursday, April 4, 2013 - 22:07
    Inside 3D Printing Conference & Expo

    The Inside 3D Printing Conference & Expo, April 22-23 in NYC, has attracted 3D printing companies, professionals, industry leaders, and hobbyists who will meet to discuss the latest topics and advancements in the ever-evolving 3D printing field.

    You’ll hear presentations by leaders in the field: Avi Reichental, President and CEO of 3D Systems; Hod Lipson, associate professor at Cornell University and coauthor of Fabricated: The World of 3D Printing; and Ofer Shochet, Executive VP of Products at Stratasys. View the full agenda here.

    Featured Session:

    How Professional Investors Are Playing the 3D Printing Boom
    A panel of venture capitalists, including professionals from T. Rowe Price Associates, Lux Capital, RRE Ventures, and Piper Jaffray, will explore where investors are placing their bets. You’ll learn what types of startups VCs are interested in funding and where to invest your own money in this emerging industry.

    The event’s exhibit hall and networking reception will provide attendees with an exciting opportunity to meet face-to-face with companies in the space.
    RepRappers will save 15% off gold passports to the event with the code RRP15. For the best rates, register today, Thursday, April 4.

  • Sunday, March 24, 2013 - 22:04
    Stop bad 3D printing patents

    The EFF and Ask Patents are organising the submission of prior art to stop 3D printing patents that shouldn't be awarded. Details are here:


    There is a vast wealth of research and prior art in the RepRap community, on its Wiki, and in its blogs.  All this material is in the public domain, and any attempt to patent any of it should not, therefore, be allowed.

    Please take a minute every now and then to visit the site and see if some company is trying to patent something that you know is already public.  And fill in the form with a reference (or better a link) to any prior art that you know.