Life, Teams, and Software Engineering


Selenium Server and Internet Explorer

Our team has started on a fairly large greenfield project for one of our clients, so we have the opportunity to decide what is good enough before getting too far along.  We chose Selenium as our browser testing automation framework because it is widely used and has a large support community.  That it’s free also doesn’t hurt, but sometimes that fact doesn’t help either.

We chose to use Selenium Server over WebDriver because it allows our tester, who has very little development background, to develop his tests using Selenium IDE, save the tests as Selenese HTML, and move on.  We started off exporting the test cases to C# and running them that way, but that approach has some big issues that caused us to move away from it.  In no particular order:

  • the extra step was easy to forget
  • it was difficult to know if you were running the most up to date version of the test
  • there was no incentive to keep the original Selenese files around
  • the conversion was one-way
  • any helpers that we wrote in C# couldn’t be linked up during the export process
  • Most Importantly: there was no way to run the conversion process outside of the Selenium IDE

Add these all up and it’s just not worth the trouble.  So we’ve been running raw Selenese test cases with some custom user extensions against Selenium Server for the past month or so.  Here is a dump of what we’ve discovered regarding the current version of Selenium Server Standalone (2.28) and Internet Explorer 9.
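
In case it helps anyone setting up something similar: the server can run a Selenese HTML suite directly, which is roughly how our tests are executed.  The jar version, browser string, and paths below are placeholders for illustration, but the -htmlSuite and -userExtensions options are the relevant pieces:

    REM Sketch of running a raw Selenese suite against IE via Selenium Server.
    REM The jar version and paths are placeholders, not our actual layout.
    java -jar selenium-server-standalone-2.28.0.jar ^
        -userExtensions "C:\tests\user-extensions.js" ^
        -htmlSuite "*iexplore" "http://www.google.com/" ^
                   "C:\tests\suite.html" "C:\tests\results.html"

The server runs the whole suite, writes a pass/fail report to the results file given as the last argument, and then exits.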

Focus Is King

Selenium exhibits some strange quirks when executed inside Internet Explorer, most of which have to do with firing JavaScript events on page elements.  The two I’ve encountered so far are ‘the annoying key press bug’ and ‘the annoying button click bug’.

The Annoying Key Press Bug

We all know Google’s auto-complete search box.  Here is a simple example that illustrates the problem by typing something into Google’s search text box and waiting for the instant results (Lady Antebellum have a song called “Hello World”, which is why their name shows up in the results):

    simpletest
    open                    /
    type                    q                hello world
    waitForElementPresent   rcnt
    verifyTextPresent       Lady Antebellum

When run against Chrome or Firefox this test will work just fine, but run it against Internet Explorer (9 at least) and you’ll get a timeout:

    simpletest
    open                    /
    type                    q                hello world
    waitForElementPresent   rcnt             [Timed out after 30000ms]
    verifyTextPresent       Lady Antebellum  [false]

Maddeningly, no combination of keyDown, keyUp, keyPress, typeKeys, sendKeys, et cetera, would make it work.

The Annoying Button Click Bug

Expanding on our last example, what happens if we click the “Google Search” button after typing into the search box?  Surely that will get us something, right?

    simpletest
    open                    /
    type                    q                hello world
    click                   name=btnK
    waitForElementPresent   rcnt
    verifyTextPresent       Lady Antebellum

Wrong.

    simpletest
    open                    /
    type                    q                hello world
    click                   name=btnK
    waitForElementPresent   rcnt             [Timed out after 30000ms]
    verifyTextPresent       Lady Antebellum  [false]

So What Is Going On?

After fighting with this and discovering several workarounds, I’ve come to the conclusion that the problem is Internet Explorer being overly paranoid.  Basically, Internet Explorer refuses to fire JavaScript events on elements that do not currently have focus.  I don’t know if this is a security feature to protect against XSS or something else entirely, but there it is.  Going back to the first test case, let’s give the text box focus before we type into it:

    simpletest
    open                    /
    focus                   q
    type                    q                hello world
    waitForElementPresent   rcnt
    verifyTextPresent       Lady Antebellum

Now the test succeeds against Internet Explorer!  Note that performing a click on a text box does not appear to be sufficient to give it focus, so you actually need to call the focus command.

And what about the click test?

    simpletest
    open                    /
    type                    q                hello world
    focus                   name=btnK
    click                   name=btnK
    waitForElementPresent   rcnt
    verifyTextPresent       Lady Antebellum

Not quite.

    simpletest
    open                    /
    type                    q                hello world
    focus                   name=btnK
    click                   name=btnK
    waitForElementPresent   rcnt             [Timed out after 30000ms]
    verifyTextPresent       Lady Antebellum  [false]

So why not?  It turns out that I had removed focus from the Internet Explorer window while the test was running.  So if we do this:

    simpletest
    open                    /
    type                    q                hello world
    windowFocus
    focus                   name=btnK
    click                   name=btnK
    waitForElementPresent   rcnt
    verifyTextPresent       Lady Antebellum

Victory!

So how can we solve this without requiring three calls for every click or type we want to do in our system?  I added a user extension that deals with both the click and the typing focus problems.
Anywhere you encounter this issue in Internet Explorer, just replace type with compatibleType and click with compatibleClick and it should be good to go.  Now that I look at this again, I could probably override the built-in type and click commands entirely so that no changes to the test scripts are necessary.  I’ll update the Gist and this post if I go that route.
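
Since the Gist itself isn’t shown here, below is a minimal sketch of what such an extension might look like.  The command names match the ones above, but the bodies are my assumption of the approach (reusing Selenium Core’s built-in doFocus, doWindowFocus, doType, and doClick), not necessarily what the actual Gist contains:

    // user-extensions.js - hypothetical sketch, not the actual Gist.
    // Selenium Core turns every doXxx method added to Selenium.prototype into a
    // Selenese command, so these become compatibleType and compatibleClick.

    // compatibleType: give the element focus before typing so that Internet
    // Explorer will fire the JavaScript events attached to it.
    Selenium.prototype.doCompatibleType = function(locator, text) {
        this.doFocus(locator);       // built-in focus command
        this.doType(locator, text);  // built-in type command
    };

    // compatibleClick: make sure both the browser window and the element have
    // focus before clicking.
    Selenium.prototype.doCompatibleClick = function(locator) {
        this.doWindowFocus();        // built-in windowFocus command
        this.doFocus(locator);       // built-in focus command
        this.doClick(locator);       // built-in click command
    };

With a file like that loaded via the server’s -userExtensions option, a single step such as compatibleClick name=btnK replaces the windowFocus / focus / click sequence shown earlier.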


Data Ownership in The Cloud

There has been a lot of conversation recently regarding the consequences of leaving your data in the cloud (see this), but it is hardly a new topic (this, and this).  The thing that sets the Megaupload case described in the linked EFF article apart is that the Government, or at least this particular agency, has opined that unless you store your data on infrastructure you own yourself, it is no longer your property.  I’m not a lawyer and I’m not going to pretend to be one, but while their argument may have legal grounds depending on the terms and conditions of the services in question, it has no moral ground and really none based on precedent already set in other property-rights cases.

  • If I create something and attach it to an email in Gmail and send it to someone, does Google now own that?
  • If I’m an author and write my book on Google Docs, does Google own it?
  • If I store source code on GitHub, does GitHub own it?  GitHub’s hosting provider?

To be fair, Google clearly states that data stored in Google Apps is not owned by them (here), but would that stop a federal agency that decided there was something to be found?  I’d like to think so, but the world is far from perfect.

If the answer to any of the above questions is yes, then the answer to all of them must be yes, and vice versa.  It’s past time that we explicitly define property rights as they pertain to content stored on the Internet.  Since Congress isn’t accomplishing much, I think it falls to the service providers to make it very clear to their customers what content they own and what content they don’t.  They must also decide that warrants will be required to gain access to user data (that isn’t already publicly available, à la Twitter or Blogger), rather than making “strategic” business decisions that allow federal agencies to access the data just to avoid possible legal headaches.  I have no evidence that this is actually happening, but I can’t imagine that such proposals have not been made or at least considered.

Residents who lease property are afforded the same Fourth Amendment protections on the leased premises as people who own their residence (lien or not).  Your landlord cannot give the authorities access while you are in good standing, and even then not without a warrant.  It’s simply not their permission to give.  Should this same concept not transfer to services on the web?  You may not own the actual container in either case, but that doesn’t change the fact that you own the contents.

Coincidentally, the first commenter on the third link above makes the same connection between data stored in the cloud and papers or other materials stored in your apartment.


Open Doors

I have left one great team at Booz Allen for another at Rough Stone Software, a growing consulting company in Pittsburgh.  Making the decision to leave my team in October was difficult.  It will be a pretty big shift in technology, from working primarily on native desktop applications in C and C++ to web development and new business domains in .NET, but I’ll manage.  At the very least I know the Booz Allen core values will stick with me.  I hope to have the opportunity not only to contribute technically but also to lead, and to help Rough Stone and our clients’ businesses grow into the future.

At Rough Stone our mission is no different from what I’m used to: help our clients get value-creating products to market as quickly as possible and with as little waste as possible.  Great partners are always working as if they will be replaced tomorrow.  This requires transparency, honesty, and treating time (and by extension money) as if it were your own.  This is what separates great partners from someone just out to bill hours, and even from other, merely good, partners.  It goes against basic instincts to work this way, but it’s a requirement for maintaining good business relationships and building a brand centered not just on technical excellence, but on integrity.

I hope that having more regular access to technology, like most people in the world, will let me write more often, if only as an escape.  I’ll probably end up going “off topic” to touch on other subjects that interest me in order to post more frequently.  Then again, I’ve said that every year for the last several, so only time will tell.

Stay tuned.

Book Recommendations: Testing

I got an email from one of my teammates yesterday asking me for some book recommendations on testing.  I’m glad that he didn’t just ask for books on unit testing; hopefully that means I’ve done a good job of emphasizing that it’s not all about unit testing.  Anyway, here’s my list of required reading on testing:

The Art of Unit Testing, Roy Osherove - At the time, this was the only book to talk about the structure of unit tests and what made good tests versus bad tests.  To my knowledge this is still true, and it’s #1 on my required reading list for developers.  The author sought to fill the gap between starting from scratch and the various works on Test-Driven Development that were already out there.  This book doesn’t focus on TDD; it focuses on unit testing: what makes a unit test, what makes one good or bad, and how to build suites of good ones.

Agile Testing: A Practical Guide for Testers and Agile Teams, Lisa Crispin and Janet Gregory - I’m a strong believer in breaking down the silos between the “development team” and the “testing team”, to the point where I hate hearing those designations.  For any team to be successful, the entire team must be involved with everything from the beginning, and this includes testing.  Nothing is done until it is tested.  This book shows that, no matter your designation on the team, you have value to add to testing, and that testing early and often is necessary for long-term success.

Growing Object-Oriented Software, Guided by Tests, Steve Freeman and Nat Pryce - This is the best book on Test-Driven Development I have read to date.  The examples, the explanations of the whys and hows of test-first development, and the authors’ wisdom in focusing on test-first at all levels (i.e. Acceptance Test-Driven Development combined with unit-level Test-Driven Development) make it a fantastic book for learning how to apply the test-first approach.  The authors focus on showing you that incrementally building your test suite while you add functionality doesn’t have to be difficult or overwhelming.  It’s about repeating simple cycles: write an acceptance test in the outer loop, watch it fail, then move to the inner loop and develop a set of unit tests while implementing the actual functionality, working toward passing that acceptance test (and all the unit tests, of course).

xUnit Test Patterns: Refactoring Test Code, Gerard Meszaros - I’ve gone through a great deal of this book, and while it has a wealth of information, I honestly had a hard time finding places where some of the patterns could be applied to my test code.  That doesn’t make it a bad book by any stretch, however, and if you’re the kind of person that patterns speak to, this will help you find structure in what can sometimes feel like a daunting learning curve.  I see this more as a helpful reference than required reading, but it’s still nice to have around.

Advanced Software Testing - Vol. 1: Guide to the ISTQB Advanced Certification as an Advanced Test Analyst, Rex Black - Despite the fact that this is a certification book, there is a lot of useful information here about test analysis techniques.  Understanding these techniques, and both where and how to apply them, is necessary to develop test suites that are “lean”, in that they cover the necessary parts of your system without wasting effort on duplicate or ineffective test cases.  The layout is a little chaotic, but if you can cut through the noise you’ll find the techniques indispensable.

Practice - Seriously.  This is the best way to learn the ins and outs, limitations, and strengths of the various tools in various languages, as well as how to unit test a variety of situations.  CUnit or Unity?  CppUnit or gtest?  NUnit or MbUnit?  CMock or mock-by-hand?  GMock or mock-by-hand?  RhinoMocks or Moq?  You can’t know which situations each of these tools is best suited to unless you’ve tried them (ideally on a real project).  But don’t try to master the tools; try to master the practice.  It will make you more valuable if you understand the foundations, since you’ll be able to move between environments quickly with little ramp-up time.

There are no shortcuts to mastery, and as much as one would like to believe they’re special in the beginning (I’m sure I’ve been guilty of this), the 10,000 hours can’t be faked.  Practice and you’ll get better at it, you’ll find the patterns, and you’ll be able to pick up almost any tool like this and use it effectively.  But you need to develop the foundation to understand why you’re unit testing before it can evolve from just another mundane task to perform before something is “done” into something ingrained in the very way you do your work.

Overtime as Failure

First, let me make myself clear.  This post isn’t focusing on voluntary overtime (though you should still consider the impact on your team, but that’s another post); it’s focusing on forced overtime in order to meet some deadline.  I consider this to be one of the worst kinds of failure.

Teams forced into this situation are typically faced with the worst of all possible choices.  Do we take shortcuts to get out on time and risk the quality issues that can come from that, or do we work long days and weekends, give up having a life, and still likely introduce problems due to burnout?

It’s a failure if you even have to ask this question, but let’s look at the choices anyway.

Keep Working 8 Hours

Hopefully this is the direction your management is leaning, but it comes with a cost.  In order to continue working at the same pace, you must sacrifice scope or let the schedule slip.  Yes, personnel can change too, but I’m assuming that by the point you realize you need to make this decision, adding new people won’t get you to the finish line any faster.  Everyone must either accept the delay, with the understanding that this is what is necessary to achieve the expected level of scope and quality, or strip out a few unfinished/untested features and move them to the next release (yes, they should move to the top of your backlog).  This will let you release a working product of known quality without cutting corners.

Burn the Midnight Oil

Some people might not think this is so bad.  I think it’s one of the worst things you can do to your team.  What message are you sending when you’re forced into this situation?  To management?  To your team?  By deciding to work long hours and weekends, or to burn the midnight oil to put out the occasional fire (many of which were a direct result of those long hours and weekends), you’re sending the message to everyone involved that this is the new normal.
This sets a precedent and creates the expectation that all future situations like this will be addressed the same way.  Then the first time it doesn’t go this way (because you tried to do it the right way), you’ll catch flak about why your team is no longer dedicated to their job.  Do you really want to knowingly commit to 7-10s every few months?  Think of the children.

Evaluate Why

After you’ve made it across the finish line, you absolutely must evaluate how you got there.  How did you get into that situation to begin with?  Did the team commit to too much?  Did certain features take longer to implement or test than the team estimated?   Did the team not consider testing costs?  Were there a lot of unknown unknowns that crept in?  Maybe some other part of the process caused the schedule to drag?  You need to answer these questions in order to avoid this type of situation in the future.  

Don’t Go There

This is one of those cases where the best answer is to avoid asking the question in the first place.  Pull things that would normally be done at the end of a project earlier into the schedule.  Build your installer or deployment pipeline after the first couple of weeks, produce system-level tests for each new feature you add (and make sure they pass before moving on!), do code reviews on a per-feature basis rather than monumental reviews at the end, etc.  The more frequently we do these things, the better we get at them, and it has the welcome side effect of enabling us to release almost all the time.
Remember, no matter what anyone tries to force you to do, Quality Is Not An Option.  Be a professional and stand your ground.  This is a tough spot to be in, but burning your team out or torching the codebase with sketchy implementations will do more harm than good in the long run.
Comments are open.  I’m sure everyone has their own stories or advice to lend to this situation.  Let’s hear it!