Planet

  • Monday, December 16, 2013 - 12:20
    Chris Lord: Linking CSS properties with scroll position: A proposal

    As I, and many others have written before, on mobile, rendering/processing of JS is done asynchronously to responding to the user scrolling, so that we can maintain touch response and screen update. We basically have no chance of consistently hitting 60fps if we don’t do this (and you can witness what happens if you don’t by running desktop Firefox (for now)). This does mean, however, that you end up with bugs like this, where people respond in JavaScript to the scroll position changing and end up with jerky animation because there are no guarantees about the frequency or timeliness of scroll position updates. It also means that neat parallax sites like this can’t be done in quite the same way on mobile. Although this is currently only a problem on mobile, this will eventually affect desktop too. I believe that Internet Explorer already uses asynchronous composition on the desktop, and I think that’s the way we’re going in Firefox too. It’d be great to have a solution for this problem first.

    It’s obvious that we could do with a way of declaring a link between a CSS property and the scroll position. My immediate thought is to do this via CSS. I had this idea for a syntax:

    scroll-transition-(x|y): <transition-declaration> [, <transition-declaration>]*
    
        where transition-declaration = <property>( <transition-stop> [, <transition-stop>]+ )
          and transition-stop        = <relative-scroll-position> <property-value>

    This would work quite similarly to standard transitions, where a limited number of properties would be supported, and perhaps their interpolation could be defined in the same way too. Relative scroll position is 0px when the scroll position of the particular axis matches the element’s offset position. This would lead to declarations like this:

    scroll-transition-y: opacity( 0px 0%, 100px 100%, 200px 0% ), transform( 0px scale(1%), 100px scale(100%), 200px scale(1%) );

    This would define a transition that would grow and fade in an element as the user scrolled it towards 100px down the page, then shrink and fade out as you scrolled beyond that point.

    But then Paul Rouget made me aware that Anthony Ricaud had had the same idea, but proposed tying it to CSS animation keyframes instead of this slightly arcane syntax. I think this is more easily implemented (at least in Firefox’s case), more flexible and more easily expressed by designers too. Much like transitions and animations, these need not be mutually exclusive though, I suppose (though the interactions between them might mean that, as a platform developer, it’d be in my best interests to suggest that they should be :)).

    I’m not aware of any formal proposal of this suggestion, so I’ll describe the syntax that I would expect. I think it should inherit from the CSS animation spec, but prefix the animation-* properties with scroll-. Instead of animation-duration, you would have scroll-animation-bounds. scroll-animation-bounds would describe a vector, the distance along which would determine the position of the animation. Imagine that this vector was actually a plane that extended infinitely, perpendicular to its direction of travel; your distance along the vector is unaffected by your distance to the vector. In other words, if you had a scroll-animation-bounds that described a line going straight down, your horizontal scroll position wouldn’t affect the animation. Animation keyframes would be defined in the exact same way.

    [Edit] Paul Rouget suggests that rather than having a prefixed copy of animation, a new property be introduced, animation-controller, whose default value would be time, with scroll as a new option. We would still need an equivalent to duration, so I would re-purpose my above-suggested property as animation-scroll-bounds.
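
    To make the two variants concrete, here is a rough sketch of how the keyframe-based idea might look. To be clear, this is speculative syntax from this post only – neither variant is implemented anywhere, and the notation for the bounds vector is just a guess:

    /* Keyframes are declared exactly as for time-driven animations. */
    @keyframes grow-fade {
      0%   { opacity: 0; transform: scale(0.01); }
      50%  { opacity: 1; transform: scale(1); }
      100% { opacity: 0; transform: scale(0.01); }
    }

    .scrolled-element {
      /* Variant 1: prefixed copies of the animation-* properties, with
         scroll-animation-bounds in place of animation-duration
         (here, a vector pointing 200px straight down the page). */
      scroll-animation-name: grow-fade;
      scroll-animation-bounds: 0px 200px;

      /* Variant 2: reuse animation-*, but switch the controller from
         the default 'time' to 'scroll'. */
      animation-name: grow-fade;
      animation-controller: scroll;
      animation-scroll-bounds: 0px 200px;
    }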

    What do people think about either of these suggestions? I’d love to hear some conversation/suggestions/criticisms in the comments, after which perhaps I can submit a revised proposal and begin an implementation.

  • Friday, December 13, 2013 - 13:32
    Open Bidouille Camp 2013

    We will be at the Open Bidouille Camp this Sunday, December 15, in Montreuil (93) to run a Dominoux soldering workshop.

    Open to everyone, it gives young and old alike the chance to learn to solder while having fun!

    On the OBC website you will find all the important information for taking part in the event and for finding out about all the exhibitors who will be there.

    We hope to see many of you there!

  • Monday, December 2, 2013 - 16:05
    RS Components distributing RepRaps

    This blog is for the RepRap Project, and so I do not normally post information here about the activities of our company, RepRapPro Ltd.  See our company blog for that sort of thing.

    This post is different: from today, a seriously major international company – RS Components, the world’s largest distributor of electronics and maintenance products – will be stocking and selling completely open-source RepRap kits. And in the future they hope to be selling components for RepRaps. In particular they want to sell vitamins-only kits so that people can print their own RepRaps.

    For more details see RS's blog post here, and, of course, their catalogue here.

  • Friday, November 29, 2013 - 15:31
    Chris Lord: Efficient animation for games on the (mobile) web

    Drawing on some of my limited HTML5 games experience, and marginally less limited general games and app writing experience, I’d like to write a bit about efficient animation for games on the web. I usually prefer to write about my experiences, rather than just straight advice-giving, so I apologise profusely for how condescending this will likely sound. I’ll try to improve in the future :)

    There are a few things worth knowing that will really help your game (or indeed app) run better and use less battery life, especially on low-end devices. I think it’s worth getting some of these things down, as there’s evidence to suggest (in popular and widely-used UI libraries, for example) that it isn’t necessarily common knowledge. I’d also love to know if I’m just being delightfully/frustratingly naive in my assumptions.

    First off, let’s get the basic stuff out of the way.

    Help the browser help you

    If you’re using DOM for your UI, which I’d certainly recommend, you really ought to use CSS transitions and/or animations, rather than JavaScript-powered animations. Though JS animations can be easier to express at times, unless you have a great need to synchronise UI animation state with game animation state, you’re unlikely to be able to do a better job than the browser. The reason for this is that CSS transitions/animations are much higher level than JavaScript, and express a very specific intent. Because of this, the browser can make some assumptions that it can’t easily make when you’re manually tweaking values in JavaScript. To take a concrete example, if you start a CSS transition to move something from off-screen so that it’s fully visible on-screen, the browser knows that the related content will end up completely visible to the user and can pre-render that content. When you animate position with JavaScript, the browser can’t easily make that same assumption, and so you might end up causing it to draw only the newly-exposed region of content, which may introduce slow-down. There are signals at the beginning and end of animations that allow you to attach JS callbacks and form a rudimentary form of synchronisation (though there are no guarantees on how promptly these callbacks will happen).
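
    As a minimal sketch of this approach (the .panel class, its transition rule and the startGameplay hook are all made up for illustration):

    // In the stylesheet, something like:
    //   .panel         { transform: translateX(-100%);
    //                    transition: transform 0.3s ease-out; }
    //   .panel.visible { transform: translateX(0); }
    var panel = document.querySelector('.panel');

    // Rudimentary synchronisation: fires when the transition finishes,
    // though there is no guarantee on how promptly.
    panel.addEventListener('transitionend', function() {
      startGameplay(); // hypothetical game hook
    });

    // Adding the class starts a transition the browser can optimise,
    // e.g. by pre-rendering the content that will become visible.
    panel.classList.add('visible');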

    Speaking of assumptions the browser can make, you want to avoid causing it to have to relayout during animations. In this vein, it’s worth trying to stick to animating only transform and opacity properties. Though some browsers make some effort for other properties to be fast, these are pretty much the only ones semi-guaranteed to be fast across all browsers. Something to be careful of is that overflow may end up causing relayouting, or other expensive calculations. If you’re setting a transform on something that would overlap its container’s bounds, you may want to set overflow: hidden on that container for the duration of the animation.

    Use requestAnimationFrame

    When you’re animating canvas content, or when your DOM animations absolutely must synchronise with canvas content animations, do make sure to use requestAnimationFrame. Assuming you’re running in an arbitrary browsing session, you can never really know how long the browser will take to draw a particular frame. requestAnimationFrame causes the browser to redraw and call your function before that frame gets to the screen. The downside of using this vs. setTimeout is that your animations must be time-based instead of frame-based, i.e. you must keep track of time and set your animation properties based on elapsed time. requestAnimationFrame includes a time-stamp in its callback function prototype, which you most definitely should use (as opposed to using the Date object), as this will be the time the frame began rendering, and ought to make your animations look more fluid. You may have a callback that ends up looking something like this:

    var startTime = -1;
    var animationLength = 2000; // Animation length in milliseconds
    
    function doAnimation(timestamp) {
     // Calculate animation progress
     var progress = 0;
     if (startTime < 0) {
       startTime = timestamp;
     } else {
       progress = Math.min(1.0, (timestamp - startTime) /
                                  animationLength);
     }
    
     // Do animation ...
    
     if (progress < 1.0) {
       requestAnimationFrame(doAnimation);
     }
    }
    
    // Start animation
    requestAnimationFrame(doAnimation);

    You’ll note that I set startTime to -1 at the beginning, when I could just as easily set the time using the Date object and avoid the extra code in the animation callback. I do this so that any setup or processes that happen between the start of the animation and the callback being processed don’t affect the start of the animation, and so that all the animations I start before the frame is processed are synchronised.

    To save battery life, it’s best to only draw when there are things going on, so that would mean calling requestAnimationFrame (or your refresh function, which in turn calls that) in response to events happening in your game. Unfortunately, this makes it very easy to end up drawing things multiple times per frame. I would recommend keeping track of when requestAnimationFrame has been called and only having a single handler for it. As far as I know, there aren’t solid guarantees of what order things will be called in with requestAnimationFrame (though in my experience, it’s in the order in which they were requested), so this also helps cut out any ambiguity. An easy way to do this is to declare your own refresh function that sets a flag when it calls requestAnimationFrame. When the callback is executed, you can unset that flag so that calls to that function will request a new frame again, like this:

    function redraw() {
      drawPending = false;
    
      // Do drawing ...
    }
    
    var drawPending = false;
    function requestRedraw() {
      if (!drawPending) {
        drawPending = true;
        requestAnimationFrame(redraw);
      }
    }

    Following this pattern, or something similar, means that no matter how many times you call requestRedraw, your drawing function will only be called once per frame.

    Remember that when you do drawing in requestAnimationFrame (and in general), you may be blocking the browser from updating other things. Try to keep unnecessary work outside of your animation functions. For example, it may make sense for animation setup to happen in a timeout callback rather than a requestAnimationFrame callback, and likewise if you have a computationally heavy thing that will happen at the end of an animation. Though I think it’s certainly overkill for simple games, you may want to consider using Worker threads. It’s worth trying to batch similar operations, and to schedule them at a time when screen updates are unlikely to occur, or when such updates are of a more subtle nature. Modern console games, for example, tend to prioritise framerate during player movement and combat, but may prioritise image quality or physics detail when compromise to framerate and input response would be less noticeable.
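
    As a rough sketch of that idea (the function names here are hypothetical):

    function animationCallback(timestamp) {
      updateAnimationState(timestamp); // cheap: only updates values
      requestRedraw();

      if (animationFinished()) {
        // Defer the expensive end-of-animation work so that it doesn't
        // block this frame from reaching the screen.
        setTimeout(rebuildLevelGeometry, 0);
      } else {
        requestAnimationFrame(animationCallback);
      }
    }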

    Measure performance

    One of the reasons I bring this topic up, is that there exist some popular animation-related libraries, or popular UI toolkits with animation functions, that still do things like using setTimeout to drive their animations, drive all their animations completely individually, or other similar things that aren’t conducive to maintaining a high frame-rate. One of the goals for my game Puzzowl is for it to be a solid 60fps on reasonable hardware (for the record, it’s almost there on Galaxy Nexus-class hardware) and playable on low-end (almost there on a Geeksphone Keon). I’d have liked to use as much third party software as possible, but most of what I tried was either too complicated for simple use-cases, or had performance issues on mobile.

    How I came to this conclusion is more important than the conclusion itself, however. To begin with, my priority was to write the code quickly to iterate on gameplay (and I’d certainly recommend doing this). I assumed that my own, naive code was making the game slower than I’d like. To an extent, this was true and I found plenty to optimise in my own code, but it got to the point where I knew what I was doing ought to perform quite well, and I still wasn’t quite there. At this point, I turned to the Firefox JavaScript profiler, and this told me almost exactly what low-hanging fruit was left to address to improve performance. As it turned out, I suffered from some of the things I’ve mentioned in this post; my animation code had some corner cases where it could cause redraws to happen several times per frame, some of my animations caused Firefox to need to redraw everything (they were fine in other browsers, as it happens – that particular issue is now fixed), and some of the third party code I was using was poorly optimised.

    A take-away

    To help combat poor animation performance, I wrote Animator.js. It’s a simple animation library, and I’d like to think it’s efficient and easy to use. It’s heavily influenced by various parts of Clutter, but I’ve tried to avoid scope-creep. It does one thing, and it does it well (or adequately, at least). Animator.js is a fire-and-forget style animation library, designed to be used with games, or other situations where you need many, synchronised, custom animations. It includes a handful of built-in tweening functions, the facility to add your own, and helper functions for animating object properties. I use it to drive all the drawing updates and transitions in Puzzowl, by overriding its requestAnimationFrame function with a custom version that makes the request, but appends the game’s drawing function onto the end of the callback, like so:

    animator.requestAnimationFrame =
      function(callback) {
        requestAnimationFrame(function(t) {
          callback(t);
          redraw();
        });
      };
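
    Redraws requested outside of animations can then be guarded like this, assuming the animator exposes a count of running animations through its activeAnimations property (as described below):

    // Only request a redraw when no animation is running; if one is,
    // the wrapped callback above will call redraw() this frame anyway.
    function requestGameRedraw() {
      if (animator.activeAnimations === 0) {
        requestRedraw();
      }
    }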

    My game’s redraw function does all drawing, and my animation callbacks just update state. When I request a redraw outside of animations, I just check the animator’s activeAnimations property first to stop from mistakenly drawing multiple times in a single animation frame. This gives me nice, synchronised animations at very low cost. Puzzowl isn’t out yet, but there’s a little screencast of it running on a Nexus 5:

    Alternative, low-framerate YouTube link.

  • Thursday, November 28, 2013 - 22:57
    Server self-hosting meeting

    Meeting report

    Workshop / presentation: server self-hosting

    Thursday, December 12 at 8:30 pm

    @/tmp/mpt
    30 rue de newburn
    94600 CHOISY LE ROI
    RER C Choisy / BUS 183

    Hello,

    In the “self-hosting” series, I would like… the server!

    Presentations of real-world setups, from low-voltage boxes to serious racks; discussions on the whys and hows of hosting a server at home (or elsewhere), on its limits and on what is at stake; people who know these questions well will be present; and the event is totally open to anyone who wants to come, whatever their technical level.

    As usual, we have opened a pad, which anyone is free to consult and to add information and questions to, at https://pads.usinette.org/p/auto-hebergement-serveurs

    Who has already tried? What problems did you run into? What seems essential to you to cover or to do? What recent projects in this field might we not know about?

    Alban

  • Wednesday, November 27, 2013 - 15:42
    Hackito Ergo Sum 2014

    Hackito Ergo Sum 2014 is being set up. Created by /tmp/lab for researchers passionate about hacking & security, HES 2014 will be the 5th edition of this conference.

    http://2014.hackitoergosum.org/

  • Thursday, November 7, 2013 - 09:13
    Elphel next camera – sample configuration

    With all three of the new boards for the NC393 series cameras assembled (but only partially tested) it is now possible to connect them with the existing components and show some possible configurations. The main applications of Elphel cameras are scientific research, system prototyping and proof-of-concept designs – areas that routinely require unique configurations – and this new camera series will continue the tradition of high modularity.

    The camera boards look nothing like Lego blocks, but they can nevertheless zip together in different ways, allowing new systems to be built with minimal additional hardware. Elphel’s new design values our prior work (hardware development is still expensive) and provides compatibility with the existing modules, while simultaneously enabling new features that were not previously possible. The most obvious example is the sensor interface: the 10393 board is designed to accommodate our existing sensor front ends through custom flex cables of different lengths and shapes. That will help us to reduce the transition period to the new camera, so we can focus on the high performance system board and port portions of the software and FPGA code – code that is already proven to work.

    The same camera sensor ports will allow us to use the multi-lane serial sensor connections needed for modern high speed, high resolution devices, but we will work on this only after the first part is done and we are able to replace our current systems with the new ones. Implementation of the serial sensor connection poses some challenges for us, because the protocols used are not open and we have to rely on the pieces of information that are available, plus some reverse engineering and research. It is not the most fun work to do, but being an Open Hardware / Free Software company we will not provide our users with semi-open documentation. Our users will always be able to rebuild all the binaries from the source code – the same binaries from the same code that we ourselves have access to. The only NDA Elphel ever signed was with Kodak – that sensor NDA had a clear expiration time, so by the moment we planned to start distributing our products (and so the source documentation) we were no longer bound by it.

    The sample configuration illustrated below combines new and existing modules; the latter have links to their design documentation on the Elphel wiki. That is not so for the new boards (10393, 10385, 10389) – no circuit diagrams, parts lists or PCB layouts are publicly available as this post is being written. Hardware errors are usually much more expensive to fix, and we do not want somebody to duplicate our hardware “bugs” before we consider our products (“binaries”) good enough to go to our users. So while we set up a public Git repository when we start software development, we publish our hardware documentation simultaneously with the start of product distribution – together with the “binaries”, not ahead of them.

    Sample configuration of the electronic modules of Elphel NC393 camera family

    • 1 – 10393 Multisensor camera system board based on Xilinx Zynq 7030 SoC.
    • 2 – 10385 Power supply board
    • 3 – 10389 Interface board
    • 4 – Inter-board power distribution: 6-pin (3 circuits) header on the 10385, receptacles on both 10393 and 10389
    • 5 – Inter-board signal connector: 40 pins (USB, SATA, GPIO)
    • 6 – mSATA SSD card
    • 7 – Processor heat sink (temporary). Production cameras will have custom heat spreader to transfer CPU/FPGA generated heat to the camera aluminum body or other heat sinks in multicamera systems
    • 8 – Ethernet (GigE) jack, PoE-compatible
    • 9 – DC power input (9-36V or 18-72V depending on application)
    • 10 – Memory card (can be used to boot the system for cold firmware update)
    • 11 – Micro USB B connector for system serial console with GPIO signals to select boot mode and generate system reset. Mounted on the 10393 system board
    • 12 – Micro USB A host connector for communication with external memory and I/O devices. Mounted on the 10389 interface board.
    • 13 – USB A/eSATA combo connector. eSATA port will be used for interfacing external storage devices (HDD, SSD) and downloading data from the camera internal SSD to the host computer. USB portion of the connector can provide power to the external device through the same cable as SATA data.
    • 14 – 2.5mm audio type connector for external synchronization input and output (opto-isolated and directly coupled)
    • 15,16,17 – directly connected sensor front ends. Compatible with the current 5MPix 10338 (shown) and other parallel data output sensors, with programmable interface voltage. With controlled-impedance cables the same ports will allow using up to 9 differential lanes plus I2C and 2 extra control signals.
    • 18,19,20 – sensor front ends connected through the 10359 multiplexer (21), which allows simultaneous acquisition of images from up to 3 sensors into on-board SDRAM before transferring them to the system board. In the future we will develop a faster multiplexer supporting serial links to the sensors and/or the system.
    • 22 – 103695 IMU adapter board, or other "granddaughter" extension board connected to the 10389 interface (daughter) board. Two 10-pin connectors provide 3.3V and 5.0V power, USB and 4 GPIO connected to the FPGA pads through high speed voltage level shifters
    • 23 – 103696 Serial GPS adapter board with 1pps input, uses another "granddaughter" port.
    • 24,25,26 – Inter-camera synchronization (daisy chain connection) for the systems with multiple camera boards located in the same enclosure, similar to the current Elphel Eyesis4pi cameras

    The setup shown above is a sort of mockup – while all the components are real, we do not yet have software to run it, or even to test it. So there is no sense in powering up such a system – nothing will happen. And there is a lot to be done before we will be able even to completely test the new hardware and to prepare and release revision “A” of each of the prototyped boards. We plan to be ready by the middle of 2014.

  • Sunday, November 3, 2013 - 06:04
    NC393 development progress – testing the hardware
    10393 board, memory side

    We received the first prototype of the 10393 rev. “0” – the new camera system board with all the BGA chips mounted. It took a little longer because our PCB assembly manufacturer had to order solder paste stencils, as some chips (a DC-DC converter module in an LGA package and QFN chips with central thermal pads) required more than just applying tacky flux and running them through the reflow oven. The photo shows the 10393 system board together with the 10385 power supply board that I assembled earlier while waiting for the main one. This time the power supply is a separate module, so we will not need different system board versions for different power supply options as we do with Elphel’s current NC353.

    The prototype shown has the full functionality, including PoE – a feature that we will not offer in the production cameras, to stay out of trouble with the patent trolls. As soon as the relevant patents are ruled invalid we will be able to build such boards, but for now the cameras will be powered through the regular barrel-type DC jack, or the 4-pin Molex connector in multi-camera systems like Eyesis. The 10385 also has a low-leakage (a few microamps idle consumption) switch for using the battery-powered camera in remote locations, controlled by the system clock powered by a super-capacitor (not yet installed – there is an empty space with a “+” sign visible on the photo).

    10393 with 10385 board, SoC side

    I finalized the 10393 board assembly by installing the other components, including a couple hundred (bragging again) 0201 resistors and capacitors. Before starting, I tested the resistance (lack of shorts) between the ground and power rails to make sure that I had not screwed up pinouts during schematic/PCB design, so that board revision “0” had a chance of being successfully tested. I repeated those tests while installing components, as power-to-ground shorts are rather difficult to locate once there are so many tiny capacitors between them.

    With assembly done, the board was ready for the first “smoke” test – powering it up while monitoring the power consumption (I used a regular test bench power supply instead of the 10385 to provide the primary 3.3V power). I was turning power on for just a few seconds while checking the secondary voltages (1.0V, 1.8V and 1.5V) with the oscilloscope. After fixing a bad solder joint on the intermediate “power good” pullup resistor (the secondary voltages are supposed to come up in a prescribed sequence), all 3 of these voltages were up and measured OK, and the board consumed 320 mA with the system reset released but no firmware to run. There are several additional DC-DC converters on board (5V for USB and 2 independently software-regulated voltages for the external boards – sensor front ends in most applications), but these converters are turned on by software, and I did not have any software at that moment.

    10393 board, SoC side

    The photos show the heat sink and a fan attached to an aluminum angle, not directly to the Zynq chip. In the production camera there will be a custom heat sink (no fan) between the 10393 and the optional 10389 interface/storage board; it will transfer processor heat to the camera’s aluminum body, and the on-chip thermometer will be used to monitor the temperature and prevent overheating. The rather large temporary heat sink will be used during development (so as not to depend on the temperature-monitoring software); the thin angle part will make it possible to test the 10389 board, which will nearly touch the other surface of the aluminum plate.

    The next thing to test was to make the CPU (Xilinx Zynq XC7Z030-1FBG484C) run and to check the DDR3 memory. If this core of the system is operational, we can test the peripherals one by one, and failures in some of them will not prevent testing of the others. If the core failed, we would have to find out (or just guess) the problem, redesign the board, order new ones, have new stencils made, assemble and try again. Of course we will need to re-spin the board before manufacturing the production units, but I hoped that the very next revision would be good enough to go to the users – that the changes would be small. I wrote “guess” because, if the problems were related to the DDR3 memory operation, the means to troubleshoot them would be limited – the data and address/command lines are completely buried between the chips, as the memory is placed directly opposite the Zynq SoC. There are no resistor terminations on the address/command lines, the DQ lines are swapped in each byte group, and the byte groups are also swapped. I relied on the Xilinx documentation saying that they OR the data lines during write leveling, so the DQ swapping will not harm that functionality.

    Skipping the requirement for address line termination allowed the overall design to be compact and the connections themselves to be really short (actually shorter than the lines inside the SoC chip itself). I used the Micron documentation when considering such a solution, but it still needed to be tested on the real board. This component placement allowed me to keep the average length of the address/command traces to 15.5mm; individual traces had to be shortened or extended to keep the combined PCB delays and internal SoC pin delays the same for each address/command line, and for each member of each data byte group. Internal DDR3 chip delays do not need to be considered, as they are balanced inside the package. Data connection lengths (they are just peer-to-peer, with no split between the two memory chips as there is for the address/command lines) are even shorter – they average from 8.5mm to 14.5mm for the different byte groups.

    An additional challenge in initially breathing life into this new board was that we did not have proven code to run on it, something we had had for the Avnet MicroZed board while developing the free software bootloader to replace the Xilinx proprietary one. So this was a real test for our code, and I decided never even to try the proprietary one on the new system.

    The 10393 board has no LED (not counting the 2 on the Ethernet jack, but those are controlled by the Ethernet PHY), so I temporarily borrowed one GPIO signal from the MDIO bus (Ethernet PHY control) to be able to step through the boot process without relying on the serial console being operational. I just put the LED there without any transistor, so the 1.8V-powered diode was really dim, but that was OK. As it happened, the serial output turned out to be alive immediately, so there was no real need for that debug tool and I was able to remove those extra wires. The board got to the U-Boot prompt immediately, but unfortunately not every time. So I had to spend several days (one of them wasted on a faulty micro-SD card that silently replaced one sector with garbage, even when read back on a computer) figuring out the instability. I still do not understand exactly what is wrong (it happens when the relocated code switches the memory mapping and copies itself back to the low addresses), but just adding a delay by copying that range twice resolved the issue. It turned out to be a software-related problem, as it was present when running on other (proven) boards too, not just the 10393.

    The core of the system is now verified, automatic write leveling and the two other hardware-implemented memory training functions produce reasonable results and the delay settings seem to be rather forgiving. That confirms the PCB design and makes it possible to move forward with testing of the other peripherals and starting the FPGA part of the design.

    There are other urgent projects at Elphel I have to be involved in now, so I am not yet working on the NC393 full time, but passing this important test is really good news for us. Booting the new board with just free software, no proprietary tools at all, is also very encouraging. Xilinx just released a new version of their tools; the human-readable (HTML) part of the FSBL output looks even fancier than that of Ezynq, but I believe ours is still more convenient to work with – we made it for ourselves, and so for other developers (who are like us) too.

  • Saturday, November 2, 2013 - 09:11
    SlyBlog: Introducing OpenPhoenux Neo900

    The latest device in the OpenPhoenux open hardware family is the Neo900, the first true successor to the Nokia N900. The Neo900 is a joint project of the Openmoko veteran Jörg Reisenweber and the creators of the GTA04/Letux2804 open hardware smartphone at Golden Delicious Computers. Furthermore, it is supported by the N900 Maemo5/Fremantle community, the Openmoko community and the OpenPhoenux community, who are working together to get closer to their common goal of providing an open hardware smartphone, which is able to run 100% free and open source software, while being independent of any big hardware manufacturer.

    OpenPhoenux Neo900

    With the big ecosystem of free and open Maemo5/Fremantle applications, the hacker-friendly N900, which provides an excellent hardware keyboard, the variety of free operating systems from the Openmoko community (SHR, QtMoko, Replicant, …) and the experience in designing and producing open hardware devices of the OpenPhoenux community (e.g. GTA04), they want to bring the best of all worlds together in one single device, the Neo900.

    The Neo900 is meant to be an upgraded N900, with a newly designed and more powerful motherboard, which is based upon the existing and tested OpenPhoenux GTA04 design. Together with the nice housing of the N900 (e.g. slider, hardware keyboard, big screen, …), this aims to become “the hacker’s most beloved device”. In the same spirit as the OpenPhoenux community, which created unique cases for their GTA04 devices out of aluminium, wood or 3D printing, there is also an effort to build an aluminium housing for the N900, which might lead to personalized and self-produced cases for the Neo900 in the future, and thus independence from spare parts of N900 smartphones.

    Due to the fact that the Neo900’s new motherboard is very similar to the GTA04, it is possible to reuse most of the low level software stack – development tools, the bootloader and the Linux kernel – from the GTA04 project, with just minor modifications applied. This will speed up the software development process for this new open hardware platform a lot!

    To fund the development and prototyping of this new open hardware device, which is made in Germany, a crowdfunding campaign was started a few days ago, in order to collect 25.000€ (which is by now already halfway reached!). Depending on the outcome of this fundraising, the project might be able to provide better hardware specs than the following minimum key feature set:

    • TI DM3730 CPU (OMAP3 ARM Cortex A8) with 1+ GHz
    • 512+ MB RAM, 1+ GB NAND flash, 32+ GB eMMC, Micro-SD-Reader
    • 3.75G module for UMTS/CDMA; 4G (LTE) optional
    • USB 2.0 OTG High Speed
    • GPS, WLAN, Bluetooth
    • Accelerometer, barometric Altimeter, Magnetometer, Gyroscope
    • support of N900 camera module

    If you want to see the N900 live on, want to help the independent open hardware community succeed, or are looking for a new, hacker-friendly smartphone, you should consider supporting the fundraising with a donation. If you donate 100€ or more, your donation will also serve as a rebate on a finished device, once they are ready.

    [Update 2013-11-04] The goal of 25.000€ has now been reached, less than a week after the fundraiser started! Thanks to everybody who donated and spread the word, and thus helped make that happen. If you want to qualify for the rebate on the finished device, it is still possible to donate.

    Let the OpenPhoenux fly on!

  • Friday, November 1, 2013 - 11:01
    openmoko-fr: New hardware announced: Neo900 / GTA04 development

    Hello everyone!

    It has been a long time since I last wrote on this blog, but that does not mean that activity around OpenMoko is dead.

    Indeed, Radek decided that QtMoko was stable enough to slow down the pace of development, and he explained on the community mailing list that, for him, his distribution is mostly useful while waiting for a port of Android to the GTA04.

    Speaking of Android, the Replicant project has tried to port its version of Android to the GTA04, but they had trouble with the kernel, which has a few incompatibilities with Android. As there are only two developers at Replicant, and the most active one does not know kernel development well enough to port Replicant to the GTA04, it was decided to wait until the kernel is usable before continuing the effort.

    That is why Golden Delicious is following Linux kernel development at each RC (their work is currently based on version 3.12), since Android is gradually merging into the mainline Linux kernel with each release. So with a little more time, I hope we will be able to benefit from Golden Delicious’s kernel expertise and Replicant’s Android expertise to finally have a usable Android 4.x port on the GTA04 :)

    Meanwhile, Golden Delicious has decided not to give up on making hardware that is “as free as possible”, and is proposing a new project with the Nokia N900 community: the Neo900.

    The goal of this project is to build on the GTA04’s development to revive the N900 community and to offer them somewhat freer hardware (the N900 has a free OS, Maemo, but the hardware was never opened by Nokia). The idea is thus to reshape the GTA04 board to fit into the N900’s case, and to take the opportunity to add an LTE module.

    Do not worry about Golden Delicious multiplying its projects: the goal is to share as many chipsets as possible between them, so as to place bigger orders than a single project could.

    The most exciting thing about this project is that it brings the various open-source/free-software communities together around the development of a single piece of hardware, and can thus offer even better support to users.

    A new project also means funding: Golden Delicious launched its donation campaign on October 30. Do not hesitate to contribute!

    Note also that the first milestone of 5,000€ needed to start development was already reached yesterday, and that at the time of writing this article the campaign has just passed 10,000€.

    See you soon!

    Trim

  • Tuesday, October 29, 2013 - 16:54
    Quadrotor copter with machine vision for contest

    This page gives a brief overview of a multirotor UAV platform called “Tau”, built specially for participating in a flying robot contest established by the Russian company Croc. For now the contest has only Russian participants, probably because it was held for the first time.

    Our team name was “Autonomous aerospace”. We are from Krasnoyarsk, a city of one million people in Siberia. We had experience in UAV airplane development and manufacturing, and we have grown from a university (SFU) scientific team of students and postgraduates into a startup company.

    In building the contest machine we were not looking for the easiest implementation. Among our goals were the further development of our autopilot and gaining experience in integrating real-time machine vision into the control loop.

    During contest preparation we dealt with a multirotor platform for the first time; before that we only had airplane autopiloting experience. Adapting the autopilot to a quadrotor was not as straightforward as we expected, but we succeeded. We can now proudly say that we built the first quadrotor that calculates all the navigation and control math under the QNX real-time operating system :) . At least, no one had done crazy stuff like this before :)

    Mission

    The mission is to take off from the start marker, follow a simple maze toward the finish marker, touch down within its contour and then fly back, finally landing on the start marker and cutting off the engines. On the path to the target a random barrier is set up. The organizers can move it across the course, so the gate might be aligned at the left, at the right or anywhere between the walls.

    The drone is allowed to touch the walls, but not the ground.

    On-board UAV control system

    Computers

    The central control unit is the AP-05 autopilot (AP). It has an embedded inertial navigation system (INS), an air data system (ADS), and receivers for the GLONASS/GPS global navigation satellite systems (GNSS). The computer in the AP-05 is an ARM9-family processor with a 400MHz clock frequency and 64 megabytes of RAM, operating under the QNX Neutrino real-time operating system (RTOS), used under an academic licence. A major point is the implementation of the navigation and control loop under QNX as separate processes: fnav for navigation, fcont for control. The loop frequency is 200 Hz.

    Decisions for flight in the contest maze are made in the autopilot by setting input values for the roll, pitch and yaw PID regulators. The machine vision computer (MVC) is an i.MX6Q SABRE Lite board with 4 Cortex-A9 cores. To further our research into QNX technologies, machine vision is also computed under QNX. The connection between the AP and the MVC is made over Ethernet via the native qnet protocol. For the programmer this gives transparency and flexibility: all interprocess communication is Unix-like, whether local or remote, via QNX messages – local messages are handled by the kernel, remote ones by the kernel plus qnet.

    Sensors

    SRF08 ultrasonic rangefinders are used as proximity sensors. They are mounted on the bumper, one each for the front, rear, left and right sides, and the same sensor type is used for altimetry. The sensors are connected to the i.MX6Q SABRE Lite (MVC) via I2C, on the same bus with different addresses. Running the altitude and wall-navigation control loop over such a long path looks weird, but the AP does not have external I2C due to its noise vulnerability. The process that polls the rangefinders exposes the data to the system through a /dev/fsrf resource manager. The autopilot reads this data over the qnet stack as the file /net/mvc/dev/fsrf. After being read by the navigation process, the range data is filtered and then used as feedback for the altitude control and wall avoidance algorithms.

    When we were looking for a camera, the main problem was providing a software interface to OpenCV under QNX. Porting a USB webcam interface to QNX in a short time seemed impossible because of our lack of knowledge in that field. That is why the camera search was narrowed to IP cameras only, and finally the Elphel NC353L was found. It has several software interfaces for images: MJPEG over RTSP, and HTTP. The camera is open source, so it seemed a guaranteed way to implement our own low-level protocol and image pre-processing.

    The camera also has many configuration parameters for optimizing the real-time picture, and its sensor has a higher resolution than other cameras in the same price segment. Knowing the camera was open source, we estimated our chances of missing an appropriate solution as very low, and this estimate was correct =). The machine vision algorithm is computed by a process called fmv, and its discrete results are exposed through a /dev/fmv resource manager.

    Machine vision

    Start/finish marker search

    Searching for the start/finish points is done by comparing colour histograms of the current image with histograms of reference images. The histograms for the B, G and R channels are compared individually, and then an integral weighted estimate of similarity is calculated. Similarity is calculated separately for the start and finish markers.

    Stereo vision

    For the barrier gate entrance we initially decided to implement stereo vision algorithms to determine its position. At the beginning of contest preparations, the width between the walls on the final approach to the finish marker was supposed to be 20 meters, and finding a 3m-wide gate in that span seemed challenging. That is why we decided to integrate the Elphel NC353L solution: this version has a multiplexer board, which simultaneously gathers data from both sensors into a single image. The stereo camera was generously provided to us by Elphel for participating in the contest.

    We had previously tested the semi-global block matching (SGBM) algorithm, a method that produces a disparity map from two images. Using SGBM requires distortion remapping and alignment preprocessing of the input images. Using the cameras’ intrinsic parameter matrices we rectified the images, so that each row of the left image coincides with the corresponding row of the right image. We experimentally tuned the scene parameters and looked for the optimal disparity map. The disparity map has the same dimensions as the input images but consists of 16-bit depth values. Looking at a single row in the middle of the image, selected by the INS to fit the horizon, we recovered the distance to nearby objects and expected to locate the gate.

    Multicopter UAV Tau frame design

    Starting from the design…

    To fit all the required devices compactly we decided to make a central frame with 3 levels, each level a milled carbon fiber plate. The level plates are fitted together with aluminium spacers, and between the first and second levels there are carbon beams, tightened between aluminium clamps. At the end of each beam a motor is mounted using aluminium brackets; the motors drive 12″ x 4.5 propellers. For the protection of the propellers and equipment a special bumper was made: 4 parts form a closed perimeter, each with a U-like cross-section, made of a 3-layer carbon composite sandwich. The bumper is mounted with Г-shaped brackets fixed at the bottom of each motor mount.
    After the design process, production and assembly started. First the carbon fiber plates and beams were baked, while in parallel all the aluminium parts were milled. The prepared plates were then machined on a CNC mill, followed by the molds for the bumper and brackets.

    After all that, assembly started! In five days we fitted everything together and wired up all the devices.
    The airframe design is freely available in STEP format: with all equipment, and as a plain frame.

    Flight testing 

    When assembly was done, 10 days were left before the contest began. We had had a flight test platform before, so we were not starting from scratch on the flight software.

    Previous results had been obtained on a strong fiberglass frame. Some explanations are given in Russian in the following videos:

    After assembling the contest drone we spent 5 days making it fly properly: maintaining attitude and regulating the distance from the walls.

    The next five days we spent testing the whole mission algorithm in combination with machine vision and real markers. We got some successful complete tests, but the whole system was very unstable. Most of the problems were about flying. A lot of time was eaten up by I2C rangefinder problems: the high motor currents and vibration made contacts and the ground potential unstable, which led to the bus getting stuck. When the bus got stuck, the altimeter also got stuck, which led to the engines turning off. Many thanks to our designers and the whole mechanical shop – in dozens of falls we only once broke a bumper bracket, and one leg.

    The algorithm for flying the maze is classical: keep right, keep your distance from the walls, and pray :) . We do not make turns; the UAV maintains the yaw set at initial alignment. At the start it is aligned with its rear side toward the direction of travel, so it begins by flying backwards, then left, then forwards – and on the flight back, in reverse.

    Flying forward means holding the distance from the front wall. When the wall is far away, the front rangefinder saturates at its maximum value, so the regulator moves the drone forward by tilting its pitch forward.

    Contest video

    In the real contest (the sizes were officially corrected) the distance between the final-approach walls became 5 meters, so finding the gate was not such a big problem anymore, and barrier detection was handled in the autopilot by a finite state machine. If the front stereo camera (through one of its eyes) has seen an ellipse in front of it, that means we have passed the gate and must soon see the marker with the downward-looking camera. If not, we are probably holding the distance from the barrier wall and must move left.

    First attempt 

    It failed because of an improper finite-state-machine criterion for barrier avoidance. The drone thought it had reached the barrier, and on the next cycle thought it had reached the front wall at the marker; it did not find any markers and turned back.

    Second attempt

    Here our machine vision algorithm failed. The camera did not recognize the landing marker, so the drone tried to find it on the way back, which was a dead end of the algorithm.
    As always, it was just a question of two more days of debugging to make everything right :)

    Conclusion

    We did not completely succeed, but we did not fail either.
    Our team dramatically improved its existing software and developed a new area of expertise – machine vision.
    It was a great teamwork experience that charged our team up to handle further challenges.

    Update 30.10.2013:

    While this text was being posted, a new contest was announced for 2014. We are going to create a new team consisting only of students, to tackle the new contest mission with the already-prepared machine. Now we have a chance to get our initial ideas realized.

  • Monday, October 28, 2013 - 10:22
    Chris Lord: Sabbatical Over

    Aww, my 8-week sabbatical is now over. I wish I had more time, but I feel I used it well and there are certainly lots of Firefox bugs I want to work on too, so perhaps it’s about that time now (also, it’s not that long till Christmas anyway!)

    So, what did I do on my sabbatical?

    As I mentioned in the previous post, I took the time off primarily to work on a game, and that’s pretty much what I did. Except, I ended up working on two games. After realising the scope for our first game was much larger than we’d reckoned for, we decided to work on a smaller puzzle game too. I had a prototype working in a day, then that same prototype rewritten in another day because DOM is slow, then rewritten again in yet another day because, as it turns out, canvas isn’t particularly fast either. After that, it’s been polish and refinement; it still isn’t done, but it’s fun to play and there’s promise. We’re not sure what the long-term plan is for this, but I’d like to package it with a runtime and distribute it on the major mobile app-stores (it runs in every modern browser, IE included).

    The first project ended up being a first-person, rogue-like, dungeon crawler. None of those genres are known for being particularly brief or trivial games, so I’m not sure what we expected, but yes, it’s a lot of work. In this time, we’ve gotten our idea of the game a bit more solid, designed some interaction, worked on various bits of art (texture-sets, rough monsters) and have an engine that lets you walk around an area, pick things up and features deferred, per-pixel lighting. It doesn’t run very well on your average phone at the moment, and it has layout bugs in WebKit/Blink based browsers. IE11’s WebGL also isn’t complete enough to render it as it is, though I expect I could get a basic version of it working there. I’ve put this on the back-burner slightly to focus on smaller projects that can be demoed and completed in a reasonable time-frame, but I hope to have the time to return to it intermittently and gradually bring it up to the point where it’s recognisable as a game.

    You can read a short paragraph and see a screenshot of both of these games at our team website, or see a few more on our Twitter feed.

    What did I learn on my sabbatical?

    Well, despite what many people are pretty eager to say, the web really isn’t ready as a games platform. Or an app platform, in my humble opinion. You can get around the issues if you have a decent knowledge of how rendering engines are implemented and a reasonable grasp of debugging and profiling tools, but there are too many performance and layout bugs for it to be comfortable right now, considering the alternatives. While it isn’t ready, I can say that it’s going to be amazing when it is. You really can write an app that, with relatively little effort, will run everywhere. Between CSS media queries, viewport units and flexbox, you can finally, easily write a responsive layout that can be markedly different for desktop, tablet and phone, and CSS transitions and a little JavaScript give you great expressive power for UI animations. WebGL is good enough for writing most mobile games you see, if you can avoid jank caused by garbage collection and reflow. Technologies like CocoonJS make this really easy to deploy too.

    Given how positive that all sounds, why isn’t it ready? These are the top bugs I encountered while working on some games (from a mobile specific viewpoint):

    WebGL cannot be relied upon

    WebGL has finally hit Chrome for Android release version, and has been enabled in Firefox and Opera for Android for ages now. The aforementioned CocoonJS lets you use it on iOS too, even. Availability isn’t the problem. The problem is that it frequently crashes the browser, or you frequently lose context, for no good reason. Changing the orientation of your phone, or resizing the browser on desktop has often caused the browser to crash in my testing. I’ve had lost contexts when my app is the only page running, no DOM manipulation is happening, no textures are being created or destroyed and the phone isn’t visibly busy with anything else. You can handle it, but having to recreate everything when this happens is not a great user experience. This happens frequently enough to be noticeable, and annoying. This seems to vary a lot per phone, but is not something I’ve experienced with native development at this scale.
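
    For reference, a minimal sketch of that handling – the two event names are the standard WebGL ones, while stopRendering and recreateGLResources are hypothetical application hooks:

    var canvas = document.getElementById('game-canvas');

    canvas.addEventListener('webglcontextlost', function(e) {
      // Without preventDefault, the context would never be restored.
      e.preventDefault();
      stopRendering(); // hypothetical: pause the render loop
    });

    canvas.addEventListener('webglcontextrestored', function() {
      // Every texture, buffer and shader is gone; recreate them all.
      recreateGLResources(); // hypothetical app-specific setup
      startRendering();      // hypothetical: resume the render loop
    });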

    An aside, Chrome also has an odd bug that causes a security exception if you load an image (on the same domain), render it scaled into a canvas, then try to upload that canvas. This, unfortunately, means we can’t use WebGL on Chrome in our puzzle game.

    Canvas performance isn’t great

    Canvas ought to be enough for simple 2d games, and there are certainly lots of compelling demos about, but I find it’s near impossible to get 60fps, full-screen, full-resolution performance out of even quite simple cases, across browsers. Chrome has great canvas acceleration and Firefox has an accelerated canvas too (possibly Aurora+ only at the moment), and it does work, but not well enough that you can rely on it. My puzzle game uses canvas as a fallback renderer on mobile, when WebGL isn’t an option, but it has markedly worse performance.

    Porting to Chrome is a pain

    A bit controversial, and perhaps a pot/kettle situation coming from a Firefox developer, but it seems that if Chrome isn’t your primary target, you’re going to have fun porting to it later. I don’t want to get into specifics, but I’ve found that Chrome often lays out differently (and incorrectly, according to specification) when compared to Firefox and IE10+, especially when flexbox becomes involved. Its transform implementation is quite buggy too, and often ignores set perspective. There’s also the small annoyance that some features that are unprefixed in other browsers are still prefixed in Chrome (animations, 3d transforms). I actually found Chrome to be more of a pain than IE. In modern IE (10+), things tend to either work, or not work. I had fewer situations where something purported to work, but was buggy or incorrectly implemented.

    Another aside, touch input in Chrome for Android has unacceptable latency and there doesn’t seem to be any way of working around it. No such issue in Firefox.

    Appcache is awful

    Uh, seriously. Who thought it was a good idea that appcache should work entirely independently of the browser cache? Because it isn’t a good idea. Took me a while to figure out that I have to change my server settings so that the browser won’t cache images/documents independently of appcache, breaking appcache updates. I tend to think that the most obvious and useful way for something to work should be how it works by default, and this is really not the case here.

    Aside, Firefox has a bug that means that any two pages that have the same appcache manifest will cause a browser crash when accessing the second page. This includes an installed version of an online page using the same manifest.

    CSS transitions/animations leak implementation details

    This is the most annoying one, and I’ll make sure to file bugs about this in Firefox at least. Because setting of style properties gets coalesced, animations often don’t run. Removing display:none from an element and setting a style class to run a transition on it won’t work unless you force a reflow in-between. Similarly, switching to one style class, then back again won’t cause the animation on the first style-class to re-run. This is the case at least in Firefox and Chrome, I’ve not tested in IE. I can’t believe that this behaviour is explicitly specified, and it’s certainly extremely unintuitive. There are plenty of articles that talk about working around this, I’m kind of amazed that we haven’t fixed this yet. I’m equally concerned about the bad habits that this encourages too.
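
    For illustration, the usual (admittedly ugly) workaround looks something like this – reading offsetWidth forces the reflow that lets the transition actually run (element and class names made up):

    var el = document.querySelector('.dialog');

    el.style.display = 'block';  // was display: none
    void el.offsetWidth;         // force a synchronous reflow
    el.classList.add('fade-in'); // now the transition actually runs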

    DOM rendering is slow

    One of the big strengths of HTML5 as an app platform is how expressive HTML/CSS are, and how easily you can create user interfaces with them, then visually tweak and debug them. You would naturally want to use this in any app or game that you were developing primarily for the web. Except, at least for games, if you use the DOM for your UI, you are going to spend an awful lot of time profiling, tweaking and making seemingly irrelevant changes to your CSS to try to improve rendering speed. This is no good at all, in my opinion, as easy UI work is the big advantage the web has over native development. If you’re using WebGL only, you may as well just develop a native app and port it to wherever you want it, because using WebGL doesn’t make cross-device testing any easier and it certainly introduces a performance penalty. On the other hand, if you have a simple game, or a UI-heavy game, the web makes that much easier to work on. The one exception seems to be IE, which has absolutely stellar rendering performance. Well done, IE.

    This has been my experience with making web apps. Although those problems exist, when things come together the result is quite beautiful. My puzzle game, though there are still browser-specific bugs to work around and performance issues to fix, works across phones of varying size and specification, in every major, modern browser. It even allows you to install it in Firefox as a dedicated app, or add it to your homescreen in iOS and Chrome beta. Being able to point someone to a URL to play a game, with no further requirement, no limitation on distribution and no questionable agreements to adhere to, is a real game-changer. I love that the web fosters creativity and empowers the individual, despite the best efforts of various powers that be. We have work to do, but the future’s bright.

  • Wednesday, October 23, 2013 - 19:50
    Talpadk: 3D printing using Ninja Flex filament

    Yesterday I received some of the relatively new “Ninja Flex” filament sold by http://www.fennerdrives.com/ 

    As the internet doesn’t yet seem to overflow with print reviews and settings for it, I decided to post some words about it.

    NinjaFlex Sapphire 1.75mm

    The Filament

    It is always difficult to measure a soft material, but using my calipers I measured the diameter to be 1.75mm, as it is supposed to be.
    The filament also seems to be nice and round.

    I ordered the “sapphire” version of the filament; it has a nice matte blue color which turns glossy when printed.
    It is also slightly translucent when printed thinly.

    The filament is very flexible (I can tie a tight knot in it without it breaking).
    The filament is also elastic, but not as much as a regular rubber band… perhaps 5-8 times stiffer, if I should make a guess.

    The material is not known to me, but I strongly suspect it to be polyurethane (PUR) with a surface coating/treatment to make it less sticky.
    Fennerdrives already produces PUR belting, which was used in 3D printing before this material appeared, and the matte-to-glossy change points the same way.
    (Update: it has been confirmed that it is polyurethane)

    The Fennerdrives recommended settings are:

    Recommended extruder temperature: 210 – 225°C
    Recommended platform temperature: 30 – 40°C

    The filament isn’t exactly cheap – I would say roughly 3x the cost of the cheap PLA/ABS I normally buy, including shipping.
    Then again, soft/specialty filaments don’t normally seem to come cheap.
    (Actually a lot of the cost comes from the somewhat expensive UPS shipping)

    Fennerdrives ships from both the US and the UK; living in Denmark (inside the EU), this is a big plus for me.

    3D model for the rubber feet

    The test prints

    As I’m currently designing and building a tabletop CNC mill I thought that I might as well print some rubber feet for it.

    The print isn’t necessarily the simplest one due to the outwards-sloping unsupported walls.
    However, the angle is quite close to vertical and wouldn’t normally cause problems.

    The 3D model was created using FreeCAD which is my preferred open source CAD package.

    I used Slic3r for generating the G-code.

    And my printer is a RepRapPro Huxley, which has a Bowden extruder – possibly not ideal for extruding a soft and springy filament.

    Print 1

    This was done using my regular PLA/ABS profile.

    I had to abort the very first attempt, as the filament wasn’t extruding continuously.

    • I increased the extrusion temperature from the low temperature that had felt right while manually extruding the filament
    • Reduced the speed using the M220 speed-factor override (M220 S50, for instance, halves all feed rates)
    • And upped the heated bed temperature to 85°C

    Much to my amazement, the rubber foot actually printed sort of okay.
    It was, however, sticking so hard to the “Kapton” tape that removing it pulled the tape off the print bed!

    Prints 1 through 4

    Print 2

    I then tried to create a specific profile for printing the rubber filament.

    • Reduced the printing speeds to avoid having to scale them with the M220 command
    • Removed the “Kapton” tape, as it had become wrinkled anyway
    • Printed without heat on the bare aluminium print bed

    It printed with roughly the same quality as the first print, but was very, very easy to remove.

    Print 3

    I noticed that the hot end seemed quite “laggy”, probably caused by the flexible nature of the filament, and I therefore made some additional changes.

    • All print speeds were set to 15 mm/s to avoid having the extruder change speed
    • Retract was disabled, again to keep a constant pressure in the hot end
    • “Skirt loops” was increased to 4, to give the hot end more time to build up a constant pressure
    • Infill was reduced from 50% to 0% to see if vibrations from the infill moves caused the surface defects
    • The heated bed was set to 40°C

    Just after starting the print I realized that setting infill to 0% would cause some parts to be printed in mid-air with nothing supporting them from below.
    Out of curiosity I did, however, allow the print to continue.

    The printer managed to print the part despite the fact that it was “unprintable”…
    The surface finish was also very satisfying.

    Due to the 0% infill the part was slightly softer, as was to be expected.

    Print 4

    Since I don’t like printing the impossible – it may or may not succeed – I made one small change:

    • I changed the infill back to 50%

    I’m pleased to report that the surface finish seems to be just as good as before.

    Printer settings

    Please keep in mind that printer settings vary from printer to printer, and that the ones described here may not be optimal even for my own printer.

    The following list is semi-sorted by what I think are probably the most important settings (a Slic3r-style rendering of them follows the list):

    • No retract
    • Uniform print speed (of 15 mm/s)
    • Multi-loop skirt (4 loops)
    • Hot end temperature 240°C
    • Print bed temperature 40°C
    • Travel speed 100 mm/s
    • Extrusion width 0.5 mm with a 0.5 mm nozzle
    • First layer 50% (might actually be a bad idea)
    • Layer height 0.3 mm
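
    For reference, these translate into roughly the following Slic3r configuration keys (an approximate sketch – key names and value formats vary between Slic3r versions, so check them against a config exported from your own install):

    retract_length = 0          # no retract
    perimeter_speed = 15        # uniform 15 mm/s print speed
    infill_speed = 15
    travel_speed = 100
    skirts = 4                  # multi-loop skirt
    temperature = 240           # hot end
    bed_temperature = 40
    nozzle_diameter = 0.5
    extrusion_width = 0.5
    first_layer_height = 50%    # “first layer 50%” – may equally have meant first_layer_speed
    layer_height = 0.3
    fill_density = 50%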

    Again while reading this keep in mind that I haven’t played very much with the temperatures.

    I had some undocumented failures after print 1 where the extruder/hot end seemed to jam, and I haven’t dared reduce the temperature again as I needed/wanted some functional prints.
    The problems may, however, be related to too-fast extrusion, filament loading, and/or the filament being deformed by the retracts.

    My prints were stringing slightly internally; lowering the temperature may be able to reduce this…

    Edits

    • It has been confirmed by the friendly customer support at Fennerdrives that the material is indeed polyurethane.
    • Even without any heat on the heated bed, it still sticks very, very well to “Kapton”.

  • Thursday, October 10, 2013 - 10:53
    off

    Pcb

    Bill of materials

    Printed circuit board Pcb
    ATTINY85V10PU microcontroller IC1 Attiny85 2
    DIL-8 IC socket IC1 8pinsocket 2
    8 MHz oscillator X1 X1
    220 µF capacitor C2 Condo220
    0.1 µF capacitor C1 Condo2
    2 x 10 kOhm resistors R3, R2 10k
    2 x 1 kOhm resistors R1, R5 R1
    2 x infrared LEDs type 333-A LED2, LED3 Ir 333 A 3
    2 x infrared LEDs type 333C/H0/L10 LED1, LED4 Ir333 C^H0^L10 3
    Green LED 3mmgreenled 2
    PNP transistor PN2907 Q5 2907 T 2
    NPN transistors PN2222 Q1, Q2, Q3, Q4 2222 2
    6-pin ISP connector ISP 6pinbox T
    Push button S1 6mmswitch
    2 x AA battery holder, + and - terminals 2aabatterypack

    Operating principle

    Pcb2

    The operating principle of the OFF is based on Mitch Altman's TV-B-GONE; more info here => http://learn.adafruit.com/tv-b-gone-kit

    The principle is as follows: the infrared codes of all (or almost all…) televisions are stored in the ATTINY85 microcontroller. When the push button is pressed, the circuit emits these codes through the 4 infrared LEDs. The range is around 40 m, and transmitting all the stored codes takes about 1 minute 30 seconds.

    This circuit consists of 3 main functional blocks:

    1. Programmable electronics:

    As soon as an electronic circuit has to carry out complex tasks, it is best to use a microcontroller (a very small computer with a processor, RAM and a tiny amount of storage). In our case this microcontroller is an ATTINY85. It is paired with an 8 MHz oscillator (also called a crystal) to obtain stable, accurate operation. Finally, the 6-pin connector is used to program the microcontroller.

    2. Amplification:

    The microcontroller cannot drive the infrared LEDs directly. In electronics, transistors are used for this. The OFF circuit uses 2 stages of them: a primary stage (2N2907) driven by the microcontroller, and a secondary stage (4 x 2N2222) driven by the primary stage.

    3. Infrared emission

    The infrared codes are emitted by 4 infrared LEDs in order to achieve a long range.

    Assembly guide

    Components on the OFF side:

    Solder resistors R2, R3 and R5.
    Warning! DO NOT SOLDER R1 BEFORE THE ATTINY85 HAS BEEN PROGRAMMED!!!!

    Solder the IC socket; there is a notch on the socket that must be aligned with the matching drawing on the PCB.

    Solder capacitor C1

    Solder oscillator X1

    Solder button S1

    Solder the green LED: the longer leg must go into the hole marked with a “+”

    Solder the IR LEDs (LED 1, LED 2, LED 3 and LED 4).
    The longer leg must be soldered flat on the pad marked “+”.
    The shorter leg must be soldered on the pad marked “LED XX” (on the other side…)

    Solder transistor Q5: spread the outer legs slightly so the component fits into place.

    Solder transistors Q1, Q2, Q3, Q4:
    Bend the centre leg of each slightly so they fit properly into their footprints…

    Solder the 6-pin connector: there is a notch on the connector that must be aligned with the one on the matching drawing on the PCB.

    Solder capacitor C2: careful, this component is polarised; the leg marked with a “-” stripe must go into the hole that is not marked with a “+” :-)

    Program the microcontroller

    The AVRDUDE commands, using an Arduino running the ARDUINO as ISP sketch as the programmer, are:

    # -P COM10: serial port of the Arduino, -b 19200: baud rate,
    # -c avrisp: Arduino-as-ISP protocol, -p attiny85: target chip.
    # First write the fuses (clock configuration), then flash the firmware:
    avrdude -P COM10 -b 19200 -c avrisp -p attiny85 -U lfuse:w:0xfe:m -U hfuse:w:0xdf:m -U efuse:w:0xff:m
    avrdude -P COM10 -b 19200 -c avrisp -p attiny85 -U flash:w:tvbgone.hex

    The hex file can be found here => http://learn.adafruit.com/system/assets/assets/000/010/188/original/firmwarev12.zip

    Finish soldering

    Solder resistor R1

    Test

  • Thursday, October 10, 2013 - 10:52
    booster

    Pcb

    The Booster circuit is based on the joule thief design.

    Bill of materials

    Printed circuit board                                 Pcb
    BC337 transistor T1 To92 2
    1 kOhm resistor R1 R1
    Push button 6mmswitch
    Ferrite core Ferrite
    White LED LED Led3
    1 m of orange wire
    1 m of grey wire
    1 battery holder Porte Pile

    Principle

    The principle is as follows:

    Schema2

    To let the LED light up at a good brightness, energy is accumulated in the coil (formed by the ferrite core and the coloured wires). Once the coil has stored enough energy, the LED + transistor pair discharges it, and the LED turns that energy into light. Once this cycle is over, the coil charges up again, starting a new cycle. This happens so fast (more than 100 times per second) that it is invisible to the human eye…

    As a result, the white LED, which classically runs from 3.6 V, can be powered from a 1.5 V cell – or even a discharged one (down to about 0.3 V).

    Assembly guide

    1. Winding the ferrite:

    The goal is to wind 20 turns of each colour of wire; the start of one winding must be joined to the end of the other (as marked by the dots on the schematic).

    2. Fitting and soldering the components on the “Booster” side:

    a. The small push button: it sits below the Tetalab logo
    b. The white LED: the longer leg goes into the hole marked with a “+”. Do not push the LED fully in before soldering it; you need to be able to bend its legs to line it up with the axis of the board

    3. Fitting and soldering the components on the “Novela 2013” side:

    a. The transistor: bend the centre leg slightly so it fits properly into its footprint.
    b. The resistor: whichever way round you please!
    c. The battery holder: mind the polarity

  • Thursday, October 3, 2013 - 00:36
    FPGA is for Freedom

    In this post I write about our current development, my first experience with the Xilinx Zynq, and also try to summarize 10+ years of experience with Xilinx FPGA devices. It is a mixture of admiration for their state-of-the-art silicon and frustration caused by their software. Please excuse my sometimes harsh words and analogies – I really would like to see Xilinx prosper and acquire a software vision that matches the freedom that Ross Freeman brought to developers of electronic devices when he invented the FPGA and started Xilinx.

    Before the new camera design started

    We had planned to update our current line of cameras for some time – Elphel’s current model, the NC353, has been in production for almost 7 years. Thanks to its Xilinx FPGA, it has been possible to upgrade it long after it was built. In 2009 we developed a new system board, built the first unit and started working with it. This board was designed around the (then new) Xilinx Spartan-6 and a Texas Instruments DaVinci processor. Memory and CPU performance were increased, and the board could support two sensors simultaneously (instead of just one in the older models), but in the 10373 camera system board I was not satisfied with the bandwidth of the datapath between the FPGA and the processor – it was enough for current sensors, but in my opinion it did not leave enough margin for future sensor upgrades, so we decided to put this project on hold and look for a better match between the CPU and the FPGA.

    Later we heard the news about the coming Xilinx Zynq devices, but initial rumors indicated that it was very unlikely these chips would be supported by the freeware development software. Luckily, that proved to be wrong, and Xilinx announced that most of the devices (excluding only the largest XC7Z045) would be supported by the free-for-download WebPack. Zynq combines a dual-core ARM CPU (with a rich set of standard peripherals) and a high-performance FPGA on the same chip, so it should be an exact match for our purposes, with intrinsically high bandwidth between the CPU and the FPGA – the parameter that killed our NC373 camera before it was born.

    Impressed by Zynq when working on the board design

    The news was really exciting, and I waited impatiently for the new devices to become available and for the free-for-download status of the required software to be confirmed – many of Elphel’s customers are developers, and we cannot force them to acquire software that would be more expensive than the hardware they purchase from us. By June 2013, when I was able to designate time for full-time work on the new project, both conditions were met and I started working on the circuit and PCB design. The Zynq features looked very nice, and the documentation was quite sufficient to work on the design. It turned out to have some small but very convenient bonuses, like decoupling capacitors embedded in the package – we mount memory on the opposite side of the board from the CPU, so it is difficult to have short decoupling connections for both of them. The high-speed serializer/deserializer capability of virtually all of the I/O pins made it possible to have dual-function sensor port connectors, compatible with our current sensor front ends (SFE) with a 12-16 bit parallel interface and capable of running serial links (such as multi-lane MIPI). Backward compatibility will reduce the time before we can start shipping NC393 cameras (and replace system boards in our Eyesis line of products); the high-speed serial capability will allow the cameras to keep up with newly emerging high-performance sensors.

    Initially, I planned to have only 3 sensor ports: one GTX to implement a SATA interface, some GPIOs for inter-camera synchronization and interfacing daughter-boards (similar to what we had on our 10369 interface board for the NC353 camera), and dedicated DDR3 memory. Yes, Zynq has really nice access from the PL (programmable logic – the FPGA part of the chip) to the system memory, but it is still beneficial to have memory that is not shared with the CPU and has a specialized controller fine-tuned for image processing applications. And I thought I’d need the 676-ball package to fit all the external devices. But by carefully going through the documentation, I realized that with the flexible I/O banking of the Zynq it is possible to fit everything needed into the significantly smaller 484-ball package, and to have four (instead of just three) sensor ports.

     A small cloud on the horizon

    When working on the circuit design, I needed to make sure that the pins I designated for the DDR3 memory interface were valid – such an interface is rather challenging to implement, and there are multiple rules that have to be satisfied simultaneously. Even though we do not plan to use the stock Xilinx memory controller in the camera, I thought that the software “wizard” that instantiates it in a design might be a good tool to verify the selected pinout – that was all I needed at this stage of the design. So I went ahead to install the software. During this process, I learned that to use the freeware software (and I already explained why it is the only kind of non-free software we can use for our products), I had to install a mandatory component that transmits data from my computer to Xilinx. It is funny – being a free software/open hardware company, we post all our development files on Sourceforge, but they still prefer to dig in our “dirty laundry”. This was very unpleasant news, and the license agreement stated that, because of the nature of the Internet, they have no responsibility if any of the information they get from my computer accidentally gets to where it was not supposed to go. OK, I decided, I’ll deal with it later when I really need it to work on the FPGA design; for now, I just need to install it and try the memory controller generator, then afterwards uninstall the software (hopefully together with the spy agent).

    Unfortunately, as often happens, the “wizard” turned out not to be smart enough, and it told me that the 16-bit wide DDR3 interface I needed would not fit. I verified the rules stated in the documentation again and searched online for questions and answers about similar cases – all confirmed that the capable Zynq silicon could handle the job, but the software “wizard” prohibited it. It is quite understandable that software programs have their limitations, but when software pretending to be “smart” is inflexible, when it (like most non-free code) does not allow the user to comment out (disable or bypass) specific checks, it causes frustration. So this software tried to make the Zynq look less capable than it actually is, and also tried to convince me that instead of the 484-ball package I should use the larger 676-ball one, leaving less room for other components. The larger package would be more expensive for our customers too, of course.

    So I just decided to move on with the circuit/PCB design regardless of my disagreement with the software – this development was described in several previous blog posts.

    By early August, the PCB design of the Zynq-based camera system board (together with the two companion boards) was finished. I went through the whole design again, trying to weed out as many errors as I could, and later that month we released the files into production. While waiting for all the components to arrive and the PCBs to be manufactured, I started to look at the first steps in the software development I would need to verify the board design. I expected to take the U-boot files developed for existing Zynq-based evaluation boards and tweak them to match our hardware – a rather straightforward process I had done before when breathing life into other systems. So first make U-boot work, then proceed to the Linux kernel – both “Linux” and “U-boot” were mentioned in the documentation, so I was sure I understood the overall process. I was wrong.

    FSBL – a piece of proprietary code generated by the proprietary tools

    Of course I understand that it may take another ten years before Xilinx realizes that combining the “blank tape” idea of the FPGA, which they pioneered, with the “totalitarian” style of the development tools software is not very efficient – I’ll get to this topic later in the post. At the moment I was just looking for an OpenEmbedded-based distribution for existing boards that I could modify for our hardware. An Internet search revealed that I would still have to use proprietary tools to generate the first stage boot loader (FSBL) – the piece of code responsible for hardware initialization. This code is launched by the RBL, the boot loader embedded in the chip’s ROM; the FSBL in its turn (running from the Zynq OCM – internal on-chip memory) initializes the external DRAM, then loads and launches U-boot. From there it is U-boot’s responsibility to load and pass control to GNU/Linux (in the sequence that interests us). Starting with U-boot, all the code is Free Software (under the mandatory-for-this-software GNU GPL license) – but not the FSBL. OK, I thought – I’ll use the tools to generate a binary blob and we’ll distribute it with our cameras. Elphel users will then be able to use only free software to rebuild the camera flash image as they want. Binary blobs are nasty, and Richard Stallman would likely refuse to deal with our cameras, but we live in the real world and need something to start with – we can try to replace that piece of code later.

    What I was not sure about was the legal status of such distribution; at the very least, all the generated text files had Xilinx copyright and “all rights reserved” notices in the header. The funny thing is that they also have “this file is automatically generated” in the same header. To me, “generated” sounds more like “created” than “copied” or “compiled”, and I did not know that robots are already recognized as authors of original works covered by copyright law. So I asked on the Xilinx forum, but I was not able to get a clear answer to the question: can we redistribute an FSBL custom-generated by the Xilinx tools for our hardware?

    We did try to generate an FSBL with the tools. I failed to install the software on my computer – probably because it had too old a version of Kubuntu, and there was a conflict between the libc6 on my system and the licensing software (this funny make-pretend licensing of freebies). Oleg was luckier than me – he has a current Kubuntu version – but his operating system was still not a perfect match for the development tools. When he tried to reassign MIO pins in the tools’ GUI, nothing seemed to happen. Later he discovered that the value actually did change; the GUI just did not show the changes. So when he pressed “Save” and opened the same page again, the new (modified) values were there. A little trick, but it made it possible to proceed with the tools.

    There are other things that I did not like in the recommended way of generating the FSBL. One is that, though I usually prefer a nice GUI to the “black screen” of the command line interface, there are some definite limitations. I like a GUI when it saves me from remembering a lot of commands and command options – it could be OK if I had to do my job in a relatively small area. But in a small company we often have to switch between mechanical design, web development, Verilog code debugging, kernel drivers and image processing – and all these activities have their specific tools. A GUI for new-board configuration is not that useful, in my personal experience; a standard configuration file with many properly commented settings is more convenient. Configuring a new Zynq-based board is not something most developers need to do a dozen times a day – once a year is a more reasonable estimate. When you develop a new board you have to go through many manual steps: studying documentation, looking for the board components, and developing a circuit diagram and PCB layout. Going through a long list of settings, reading the comments and optionally modifying some values is a very useful process for a new board, as it can help to avoid design errors that would go unnoticed if you just clicked several GUI buttons. Adding more configuration parameters to a GUI is usually more expensive than just defining more configuration values, so more parameters are likely to be hard-coded in the software and thus out of user control. The other problem with the GUI approach: I was concerned I would eventually hit the same kind of problem I had already hit with the “smart” Memory Interface Generator described above – the problem that was always a nightmare when I had to upgrade the FPGA development tools. A new version would often refuse to compile code that worked with the old version, changing rules that are impossible to bypass. And as the code is closed, you do not have many options to tell the software that you are the boss, not it.

    Configuring Zynq hardware for a commercial evaluation board with a GUI may look cool, but the configuration is mostly already defined by the board design, so each board can come with a board-specific, long and boring (but nicely commented) configuration file.

     The Ezynq project

    Considering all these shortcomings of using the FSBL, I decided to evaluate the feasibility of bypassing this proprietary code completely. According to the Xilinx documentation it seemed possible, and we did not need all of the functionality of the FSBL and the FSBL-generation software. We definitely do not need booting of secret code (Zynq has elaborate hardware and software support for such a feature); we also do not need to configure the FPGA portion (PL) until the system is running an operating system (the FSBL allows early configuration). Our plan was to add the extra functionality (previously handled by the FSBL) to U-boot itself, so that all the board configuration is done with #define CONFIG_* statements in the appropriate header files. To prevent conflicts between the new parameters and the already existing Zynq-related ones in the U-boot name scope, we added an ‘E’, starting all the parameters with “CONFIG_EZYNQ_” – this is where the project name came from. The project is available in the Elphel Git repository at Sourceforge.

    For this unexpected project, we purchased a nice small MicroZed evaluation board (the first evaluation board I have ever used in my career), so we had official software that boots and runs on this board. Even implementing a subset of the FSBL functionality, with configuration files ready for only one board and several known (and probably plenty of unknown) bugs, took me a whole month of programming. In the process I had to go through the documentation on many of the Zynq peripherals and their control registers, and on the DDR3 memory interface – which will likely help me when developing the software for the actual camera. While working on the reimplementation, I compared the generated FSBL output against the documentation and noticed several mismatches between the two, but none seemed critical. Our code will need some cleanup – at the beginning I did not know exactly what would be needed, and this is my first program in Python – but the program has proved to work, and we will maintain it and use it with future Elphel camera software distributions. I also believe that there are other developers who share my view that the best FPGA silicon on the planet deserves different software – software made for developers, not just for cool-looking presentations. And we would like other developers to try this code, creating configuration files for the Zynq-based boards they have. There are more technical details in the README file in the Git repository, and we are always willing to answer questions about this program.

     Why I believe Xilinx will turn towards Free Software

    When Ross Freeman, the FPGA inventor and one of the Xilinx founders, compared the new device to a “blank tape,” he defined the future of a new class of devices: devices where the user, and not the chip manufacturer, is in full control. It would be like it was with magnetic tapes, where people could record whatever they liked, not just what the record companies issued. This was especially important in the USSR, where I was born – the Russian singer most famous and loved by the Soviet people, Vladimir Vysotsky, “lived” mostly on magnetic tapes recorded by people against the will of the Soviet government. Magnetic tapes were the medium that brought us the Beatles – we loved them and perceived them as a “Band of Freedom.”

    Freedom is the intrinsic feature of the FPGA. I think it is a better word than “Field” for the first letter of the acronym. Unfortunately, the analogy with the “blank tape” does not go much farther – even in a non-free country, we were free to use any brand of tape recorder (domestic or brought from abroad) with the same tape. If the Soviet government had had the same level of control over the recorders as the FPGA manufacturers now have over the required development tools, we would never have been able to listen to Vysotsky or the Beatles.

    Some ten years ago, Wim Roelandts, then CEO of Xilinx, gave a presentation in Salt Lake City that I attended. When answering questions, he said that more than 98 percent of the company’s revenue comes from FPGA (“blank tape”) sales, and less than two percent from the software. Maybe the numbers have changed by now, but I do not think the difference is radical.

    I can only guess at the rationale behind reducing the value of the main (98 percent) product for the questionable benefit of a two percent byproduct. They probably cannot believe that freedom can be monetized – that it increases the value of the underlying product (and its absence decreases it) by more than those tiny two percent. They may think that it is irrelevant, and that as they produce the best tape in the world, they should use it to the competitive advantage of their tape recorders.

    There is another side to this. Totalitarianism is not competitive in the long run. The USSR was strong in the middle of the 20th century and was able to win against Hitler in WWII. Just 10 years before its collapse, I could not believe that any change would happen in my lifetime – yet there is no USSR now. At the end of the last century (and the beginning of this one), Microsoft was considered the most successful software company, a model for others. And I see some similarity between the two – trying to keep everybody under control, be it with the help of the KGB or a EULA. Soviet people did not have private property (only so-called “personal property”) – virtually everything belonged to the State. The same goes for users of proprietary software – you do not own what you paid money for, you are just granted a temporary right to use it. Microsoft is far from over, of course, but it has seen better times, and few consider it a powerful Empire now. Yes, it still dominates on the desktop, but the same approach failed in the modern areas of the web and mobile devices. These days you have to give more control to the users – or risk becoming irrelevant. Initially Apple tried hard to prevent “jail-breaking” and not to let people install their own software. Sure, they still have a lot of control, but even they had to yield some of it under pressure from users and competitors. It is even more true of the faster-growing Linux-based Android devices.

    Xilinx itself is gradually migrating towards Free Software, at least for the code that runs on its devices. I believe this process is welcomed by Xilinx developers (who have done a great job coding Free Software submitted to at least the Linux kernel and U-boot), but it is still not embraced completely by the management, who (software-wise) got stuck in the 20th century, when the microsoviet type of program was a model to follow. But this fight is an uphill battle, and they have to “surrender” more and more. The Xilinx SDK is already based on the Free Software Eclipse IDE and on software components licensed under the GNU GPL. I count on this trend; I think it will give Xilinx its own experience and prove to them that developing Free Software returns more value, expanding application areas and increasing market share for the devices.

    But this shift to Free Software does not yet apply to the main part of the software tools – the tools for the FPGA, or programmable logic (PL) in terms of Zynq development.

    The Xilinx proprietary stronghold that still seems as stable as the USSR in the early 1980s is the FPGA development tools. They do not see much pressure to stop effectively crippling their hardware with the software because 1) Xilinx FPGAs are still the best, and 2) Xilinx’s competitors cripple their products no less than Xilinx does itself. When I first started using reconfigurable FPGAs in 2002, I considered Altera too, but even their freebie software license had to be renewed every 3 months, so there was no guarantee that you would always be able to use the code you had previously developed.

    Competition on the FPGA market is increasing, and in addition to the traditional Xilinx+Altera duopoly, new players are emerging, such as Achronix and Tabula. It seems to me, however, that their bet to beat the duopoly is based on the sheer technological advantage of the Intel 14nm process, not on developer-friendly software that could really make a difference in this field.

    The installation of the “spyware” as a mandatory component of the freeware FPGA development tools (in the paid-for versions this functionality may be disabled, but it is on by default) seems to be considered of high value – otherwise they would not risk alienating their loyal customers. Why do they do it? Probably in a desperate move to get more real-life examples to improve their place-and-route and other related algorithms. I am not a specialist in these algorithms, but generally they are NP-hard, and there are many approaches to finding good-enough solutions and improving them. This involuntary feedback through the spyware is needed to train the algorithms being developed. Translated into the USSR analogy, it would be as utopian as assigning 3 KGB agents to every citizen to find out what each of them wants, and then deciding in some centralized way how to make them all feel happy. Or Apple spying on customers’ use of their phones to guess what they need, and designing in-house all the apps that currently come from independent developers. Proprietary operating systems, closed to developers and fully controlled by a single company, have already proved their inferiority on mobile devices, where they faced real competition.

    Xilinx has a unique opportunity to change this unfortunate state of affairs. They develop, produce and sell the Real Things, and Xilinx could become as recognized in FPGA development software as it is for FPGA devices now. They are in a position not just to invest heavily in Free Software infrastructure, as IBM and other companies do, but to do much more: jump-start and lead a new class of FPGA development tools – tools where users are partners, not just subjects of surveillance. Starting and maintaining a framework of Free (not freeware, like WebPack) tools could make a real difference and create value, like independently designed apps create value for Apple or Android gadgets. Just look around – it is the second decade of the 21st century, not the late 20th. Let users (and Xilinx users are really smart developers) get to the controls – they will innovate, and some may find solutions that would never come to the minds of Xilinx staff engineers.

    One may say that Xilinx already has an App Store equivalent, but the marketplace for IP cores (“vinyl records” that can be copied to the “magnetic tapes” under certain conditions) is not a substitute for a free and open FPGA development framework – users can exchange their “tape records” themselves (under various free and non-free licenses, with or without compensation) without any Xilinx involvement. In our current design, we too plan to use at least one Verilog module designed by others under the GNU GPL license, and we will handle it between ourselves and the developer directly. The other difference is that iPhone users are just phone users, and the apps they download increase the functionality (and, in effect, the value) of the phone they purchased. When an FPGA developer uses a core designed by others, she just gets part of her job already done. But increased functionality of the tools is still needed, and this functionality usually relates to a much more elaborate activity than that of the casual phone-app user – and an FPGA developer is more likely to be able to contribute back. That does not mean, of course, that many developers will contribute new P/R algorithms, but evaluating different algorithms (including experimental ones) and tweaking the parameters of the goal functions – especially when the default setup can’t make it for the user – this is what many (myself included) can do. It is especially likely to happen if the users are provided with some meaningful comments on the nature of the algorithms and the variable parameters.

    Such a development framework would make it possible for independent researchers to experiment with new methods of (for example) timing closure, and Xilinx would have various ways to encourage (and in some cases sponsor) such development – requiring less investment than when everything critical is done in-house and behind closed doors.

    When implemented, such an approach will provide multiple advantages:

    • Effectively increase the value of Xilinx silicon devices: unleash more of their power and hand it to the users. Cases like the one I described above (MIG pushing me to use a larger package than actually needed) would be eliminated – in my case I would just troubleshoot the MIG code for my configuration and submit the suggested changes (I’m sure I’m not the only one who needs to use x16 DDR3 with a Zynq in the 484-ball package). And until the needed changes were included in the main branch, others who need them would simply be able to use my modified version.
    • Reduce the cost of the tools software development, and increase its capability and quality, by integrating Free Software tools (e.g. Icarus Verilog, which we ourselves use for simulation of products based on Xilinx FPGAs) and user contributions. These contributions will be enabled by the open code of the software, and users will be more eager to get involved when they are treated as partners.
    • Improve customer relations. I’m sure that it’s not just me who hates the spyware planted on my computer. And Xilinx surely knows this too, so I consider the current state a desperate measure to bring in the data that customers are reluctant to provide voluntarily. Treating users as partners (and they really should be partners, as improvements to the software tools benefit both parties) is a better way to get the needed feedback (and even contributions, as users can do part of the work themselves) than the current model of interaction. The Linux kernel freely receives, on average, five patches per hour from thousands of developers (Xilinx included).

    Is there a risk that competitors will benefit from this Free Software? Sure; like anybody else, they will be able to use it. But they will have to play by the same rules. Even if they manage to copy all the software and adapt it to their products while keeping the code closed (only possible if the license is weak enough to allow it), their non-free product will have a lower value for users, even if the hardware alone has the same (or even higher) performance.

    I am not sure Xilinx has another decade to stay with the old software paradigm, because as the performance and complexity of FPGAs increase, the quality of the development software becomes more important – and “quality” means real quality for developers, not just a nice-looking interface. So if a new player appears on the FPGA field able to offer silicon lagging the front runners by some 3-4 years, but with a development environment based on Free Software, that company will definitely have a competitive advantage. If that happens, I’ll go for the software – but I would definitely prefer to have the best of each: superior Xilinx FPGA devices supported by developer-friendly Free Software, the only software that matches the essence of the FPGA idea – its freedom.

  • Wednesday, October 2, 2013 - 07:34
    NC393 development progress – 3

    Just a small update – we received all 3 boards ordered for the NC393 camera from Fastprint, China. We will have our contract manufacturer install the BGA chips, and then I’ll work again on the tiny 0201 components, like 4 years ago. I love to assemble such boards myself (but not too often) – going through all the components when they are real (not virtual) gives me a different perspective on the design.

    10393 System board, top side

    10389 Interface board, top side

    10385 Power supply board, top side

    10393 System board, bottom side

    10389 Interface board, bottom side

    10385 Power supply board, bottom side

  • Tuesday, September 24, 2013 - 17:13
    Xiangfu Liu: Install Xilinx(ISE 14.6) Platform Cable USB under Ubuntu 13.04 64bit

    Let’s make it simple.
    I am using Xilinx ISE 14.6; its installer fails to install the cable driver. Just ignore that error and do the following:

    sudo apt-get install fxload gitk git-gui build-essential libc6-dev-i386 ia32-libs
    cd /home/Xilinx                               # I like to install them under /home
    sudo git clone git://git.zerfleddert.de/usb-driver
    cd usb-driver/
    sudo make lib32                               # build the libusb wrapper for 32-bit tools
    ./setup_pcusb /opt/Xilinx/14.6/ISE_DS/ISE/    # point this at your own ISE version’s install path
    cd /lib/x86_64-linux-gnu/ && sudo ln -s libusb-0.1.so.4 libusb.so

    Links that may help:

    1. http://www.george-smart.co.uk/wiki/Xilinx_JTAG_Linux#Download_the_driver_source
    2. http://forums.xilinx.com/t5/Installation-and-Licensing/ISE-11-2-Impact-can-t-find-USB-II-cable-SLED-11-Linux-64-bit/m-p/42064?query.id=386680#M467
  • Monday, September 23, 2013 - 12:27
    Xiangfu Liu: Btctele.com is improving

    We are continuously improving the Btctelecom user experience, starting with the home page. Here is a candidate interface: https://dev.btctele.com/index2.php – if you have any comments or suggestions, please leave a message here.

    Happy Btc + Telecom

  • Wednesday, September 18, 2013 - 12:02
    Heated Piezo for Jetting Wax (and other stuff)

    I'd just like to draw everyone's attention to this really nice RepRap heated (ink)jet head by Mike Alden, shown here printing wax.

    Details are on the RepRap Wiki here.
