Reid Burke

  • Yeti at YUIConf

    Markandey Singh posted a short video of my YUIConf 2012 talk. Dav Glass is seen running around with a camera to show the audience YUI animation tests running with Yeti on various devices. Dav shipped 5 tablets, 1 phone, and an AirPort Extreme to California to make this demo happen.

    The Write Code That Works talk put Yeti in the context of software testing’s broader purpose. I also presented a few approaches for testing efficiently.

    After YUIConf, I landed a pull request that adds Mocha, Jasmine, and full QUnit support to Yeti, making it even more useful than what you see in this video. Thanks to Ryan Seddon for making that happen!

    The full session video will be available in the upcoming weeks. In the meantime, check out the slides or the Write Code That Works blog post which was the basis for the talk.

  • Write Code That Works

    Dav Glass and I visited the Yammer office in San Francisco this week to discuss build & test tools we use at YUI.

    We showed off Shifter for building YUI, Grover for testing headlessly, Yeti for testing in browsers, and Istanbul for statement-level code coverage. We use Travis for running most of this stuff in public CI. We now require 70% statement coverage before a new YUI module is allowed to land in the core, and nobody can commit while the Travis or internal CI build is broken, unless the commit fixes the build.
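    As a rough sketch of how a statement-coverage gate like this can be wired up, Istanbul’s check-coverage command fails the build when coverage falls below a threshold. This is illustrative, not the actual YUI build scripts, and test/runner.js is a placeholder for your project’s test entry point:

    ```shell
    # Run the tests under istanbul to produce a coverage report,
    # then enforce the 70% statement-coverage gate. check-coverage
    # exits non-zero if the threshold isn't met, which breaks CI.
    istanbul cover test/runner.js
    istanbul check-coverage --statements 70
    ```

    Because check-coverage exits with a failure status below the threshold, wiring it into a CI step is enough to block merges without any extra tooling.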

    This is all very impressive. But @mde was quick to notice that we didn’t drop everything to get to this point—before diving in, you first need to prioritize what you work on. I couldn’t agree more.

    When you’re starting from scratch, you start to love the metrics. Green dots for passing builds. Green coverage reports when you hit 80% of your statements. The increasing number of passing tests. I’m all for having good code coverage, but before you go crazy, you should be careful that you don’t start writing tests for the wrong part of your code.

    Your code is not uniform

    Your code has various levels of quality starting at the first commit you make. You will write some code that’ll last for weeks or months, and some code that’ll need a rewrite next week. You need to embrace this kind of change and understand where it happens in your project.

    Node.js solves this problem quite well with the notion of a Stability Index.

    Throughout the documentation, you will see indications of a section’s stability. The Node.js API is still somewhat changing, and as it matures, certain parts are more reliable than others. Some are so proven, and so relied upon, that they are unlikely to ever change at all. Others are brand new and experimental, or known to be hazardous and in the process of being redesigned. The Stability Index ranges from Deprecated (0) and Experimental (1) for unstable code to Locked (5) for code that only changes for the most serious bugs.

    It’s a good idea for any post-1.0 project to assign a Stability Index to the APIs in your own code. Not only is it a clear message to those using your APIs, but it’s also a clear message for your team. It tells you where you should—and shouldn’t—write tests.
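    One lightweight way to do this is to attach stability metadata directly to your public API. This is a hypothetical sketch, not a real Yeti or YUI convention; the function names are illustrative:

    ```javascript
    // A minimal, hypothetical way to tag your own APIs with a
    // Node-style Stability Index (0 = Deprecated ... 5 = Locked).
    var STABILITY = {
        DEPRECATED: 0,
        EXPERIMENTAL: 1,
        UNSTABLE: 2,
        STABLE: 3,
        FROZEN: 4,
        LOCKED: 5
    };

    // Tag a function with a stability level and return it unchanged.
    function stable(level, fn) {
        fn.stability = level;
        return fn;
    }

    // A battle-tested API gets Stable; a brand-new one gets Experimental.
    exports.parseConfig = stable(STABILITY.STABLE, function (json) {
        return JSON.parse(json);
    });

    exports.watchFiles = stable(STABILITY.EXPERIMENTAL, function (paths) {
        // New and likely to change: don't invest heavily in tests yet.
        return paths.slice();
    });

    exports.STABILITY = STABILITY;
    ```

    A test suite could then read the stability property and, for example, enforce coverage requirements only for APIs tagged Stable or above.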

    More stable, more tests

    If you write tests like they cost nothing, you’re going to find yourself writing tests instead of writing code that works.

    Kent Beck’s wisdom says it best:

    I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence (I suspect this level of confidence is high compared to industry standards, but that could just be hubris). If I don’t typically make a kind of mistake (like setting the wrong variables in a constructor), I don’t test for it.

    This answer is good and I’ll take it a step further: you should prioritize writing tests for parts of your code with a higher Stability Index, especially if you’re just starting on a new project.

    If you’re writing tests for code that’s rapidly changing, you’re going to spend more of your time writing tests instead of shipping features. For code that’s brand new, I typically only test-first a small amount of code and wait a while before hitting that green bar on a code coverage report.

    Don’t get sucked into the allure of metrics too early. Remember what your job is: writing code that works. Code coverage and good testing tools are very important, but not if they get in the way of building what you’re supposed to build.

  • Best Trumps Easy

    I work at Yahoo!, building open source software. I build Yeti, but I work alongside the team building YUI. The engineers who built this team, and continue to work on this team, are the reason I have chosen to stay at Yahoo! building Yeti: they choose what’s best over what’s easiest.

    You work with people too. You know your people are not perfect, but they’re going somewhere great, which is why you’ve decided to join them.

    I’m going to tell you that building software is very hard, but the most challenging part of my job isn’t building code. It’s building people.

    Your people are not a test framework or a programming language: they are human beings. Like your favorite code, they don’t always meet your expectations. They will let you down.

    People tell stories. What you do next will define the stories they tell.

    You probably rely on these people, so when they fail you, it’s going to affect you. A lot. You probably don’t deserve to be subjected to their actions.

    It’s easy to react.

    I’m telling you, the best people never settle on what’s easy.

    The best people never coddle or spin. They’re honest and speak their mind, but only after giving it 5 minutes. They think instead of react.

    The best people deliver criticism privately, one-on-one, to the person who needs it. The attitude is service and respect.

    The best people will embrace the opportunity to be a mentor instead of the opportunity to stand up for what they deserve.

    Next time your people don’t meet your expectations, I encourage you to see an opportunity to invest in people. It will be hard. It may take up a lot of your time and nobody but them will appreciate your investment. Yet serving others this way will reward you.

    You never know when you’ll need it yourself.

    It’ll also reward them. Honest feedback delivered this way is very desirable and mutually beneficial.

    If you want this kind of culture in your team or community, I’d encourage you to be the first one to start. Give more than they deserve, seek to understand what they care about, and after careful consideration, deliver your feedback to them personally.

    It’ll be the start of a conversation you won’t regret.

  • Yeti & YUI

    If you’re a YUI core developer, you should be using Yeti. Here’s how to get started.

    Yeti runs JavaScript tests in any browser. With Yeti, you capture browsers once, then submit tests to those captured browsers during the day. Yeti takes care of running all of your tests in every browser you throw at it.

    Yeti works best with modern browsers.

    Make sure you have a recent Node.js then grab the latest Yeti:

    npm install -g http://latest.yeti.cx
    

    Daily setup

    Open a new Terminal tab, cd to your YUI source, then start the Yeti server.

    cd path/to/yui3
    yeti -s
    Yeti Hub started. LAN: http://10.1.1.10:9000
                      Local: http://localhost:9000
    

    Now you’re set. Navigate your local browsers to the local link and your browsers elsewhere on your LAN to the LAN link.

    It’s important that you run this from the yui3 directory and not inside the src or build directories. That’s because Yeti’s server will only serve files in the current directory, so if you started it inside src your tests wouldn’t be able to load ../build files like the YUI seed.

    Optional: Tunnel out

    Using Localtunnel, you can easily share a Yeti Hub with browsers outside your firewall.

    gem install localtunnel
    localtunnel 9000
    

    Use the URL you get from localtunnel to connect more browsers.

    Run your tests

    This could not be easier.

    cd path/to/yui3/src/your-component
    yeti tests/unit/*.html
    

    You’ll get test feedback right away. Feel free to abort with Ctrl-C and your browsers will reset for the next run automatically.

    Easy coverage

    Would you like to see code coverage, too?

    yeti --query 'filter=coverage' tests/unit/*.html
    

    Now you have line coverage in your output.

    Use someone else’s Hub on your network

    If someone else already has browsers set up on a Hub, you can easily use their Hub by giving Yeti the Hub’s URL. Here’s an example.

    yeti --hub http://10.1.1.10:9000 tests/unit/*.html
    

    If you started a Hub, share the LAN link with others on your network and have them use the --hub option with that URL.

    This magic happens using HTTP upgrades, so simple proxies like Localtunnel or some Node.js cloud hosting providers won’t work for Hub sharing because they don’t handle these kinds of connections. Look for services that support WebSockets.
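    To make the failure mode concrete, here is a minimal illustration (not Yeti’s actual code) of the HTTP upgrade mechanism Hub sharing relies on. A plain proxy that only forwards request/response pairs never relays this upgrade event, which is why Hub sharing breaks behind it:

    ```javascript
    // An HTTP server that answers normal requests in the usual way,
    // but hands over the raw socket when a client asks to upgrade.
    var http = require('http');

    var server = http.createServer(function (req, res) {
        res.writeHead(200, {'Content-Type': 'text/plain'});
        res.end('normal HTTP request\n');
    });

    server.on('upgrade', function (req, socket) {
        // The client sent "Connection: Upgrade"; instead of a normal
        // response, we take over the raw TCP socket. WebSockets work
        // this same way, which is why WebSocket-aware proxies are needed.
        socket.write('HTTP/1.1 101 Switching Protocols\r\n' +
                     'Connection: Upgrade\r\n\r\n');
        socket.end();
    });

    server.listen(0); // pick an ephemeral port
    ```

    Any proxy between the browser and the Hub must pass that 101 handshake and the long-lived socket through untouched, which simple request-forwarding tunnels do not.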

    Run everything

    Every automated test in the project can be submitted. This will take a while.

    cd path/to/yui3
    yeti src/**/tests/unit/*.html
    

    More coming soon

    I’m making this easier every day. If you’re annoyed by something, mention @reid on Twitter or discuss Yeti on the YUI Library forums. You can also file a bug against Yeti.

    Yeti does not behave very well on older browsers. I am currently working on an all-new frontend that will be much more robust with error handling and reconnection. To keep up with what’s new, subscribe to the official Yeti blog where I highlight features and fixes for every Yeti release.

  • Ryan Grove: Why I believe in YUI

    Ryan Grove has posted a follow-up to his “YUI from the outside” post from last week.

    My blog post on Friday stemmed from my frustration at how it feels to be an outsider wanting to contribute to YUI—frustration that I probably wouldn’t feel so acutely if I hadn’t had the experience of contributing to YUI from the inside. I wanted to be frank, but I also wanted my criticism to be constructive, which is why I suggested solutions instead of just complaining.

    It wasn’t my intent to be discouraging, and I think I probably should have waited a while and written that post from a place of less frustration. I stand by what I wrote, but I regret that it was couched in negative emotions. I owe the YUI team an apology for that.

    Ryan also notes how we’re already addressing his concerns. I’m looking forward to continuing this in the weeks ahead.