Apparently, the word “viewport” comes from either spaceships or oil rigs. The first one, I could have guessed, but the second one not so much (it’s hardly the first thing that comes to mind when I think of oil rigs). And while I can understand why the word viewport is used in the context of a web browser, I never liked it. It felt a bit too futuristic.

That aside, this post will dive into the research and product thinking that led us (here at Bit Complete) to build Viewport Tester, a web-based tool that frontend and fullstack developers can start using in seconds, with nothing to install.


The friction of installable developer tools

There are some really great viewport and device testing tools out there, including Polypane and Blisk. These products work well, in that they generally accomplish the user’s goal: testing a website or product on specific devices (and viewports or breakpoints). They’re paid, but reasonably priced and actively maintained. So why did we build Viewport Tester?

A challenge we found with tools like those linked above is that they need to be downloaded and installed. This means that when a user first runs into a viewport-testing problem and seeks a solution, the flow becomes this:

  1. Search online for a way to address the problem
  2. Land on one of these product websites
  3. Evaluate the product and its functionality
  4. Evaluate whether the product is secure/trustworthy enough to warrant access to your local machine
  5. Download the product
  6. Install the product
  7. Possibly deal with computer security settings that require the user to have permission to make changes

And here’s what a testing experience looks like:

  1. Open up the product
  2. Possibly be prompted for updates (which often blocks all functionality until the update has been downloaded and installed and the product has been restarted)
  3. Use the product on your local machine.

In the first case, it may not seem like a huge deal: you download and install it once, and updates are rare. But the act of evaluating a product’s value, and navigating the installation and usage process, is not trivial. It can require a considerable amount of thought, and perhaps most importantly, can open up the user to security and privacy issues (particularly on company-issued computers).

In the second case, while software updates can be a nuisance, it’s not as bad: you open it up, you use it, you shut it down. But during our research phase, we wondered how a user might be limited by an offline product like this, at a time when so many products and services are moving online. Where we landed was that while an offline product carries a wider range of technical capabilities, it intrinsically limits the user experience. Here was our thinking:

  1. Users are used to doing the majority of their work in the browser. When they write some code and want to test it, they open their browser and trigger a reload. An offline product becomes yet another application the user needs to switch back and forth between.
  2. During QA flows, it’s important for users to quickly and easily reference the device that is causing an issue. With an offline product this becomes more cumbersome (albeit not impossible), and can force whoever receives the bug or ticket to download the same product just to reproduce the issue.

Based on the two cases above, we thought the friction of an offline product was high enough to warrant exploring what a web-based product would be like.


Exploring (possible) technical limitations

Once we started to explore a web-based viewport testing product, we were confronted with the question: how do we accomplish this, technically? While the options available to us were limited (e.g. virtual machines, screenshot/screencast tools), we settled on using iframes to accomplish our product goals.

Broadly, an iframe is a way to embed one website within another. Iframes are frequently used to serve ads, embed videos, or show tweets within news articles.
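For anyone who hasn’t worked with them from script, embedding is only a few lines. Here’s a minimal sketch; the URL and dimensions are placeholders, not Viewport Tester’s actual code:

```ts
// Minimal sketch: embed another page in the current one via an iframe.
// The URL and dimensions are placeholders.
const frame = document.createElement("iframe");
frame.src = "https://example.com";
frame.style.width = "390px";   // a phone-sized viewport, in CSS pixels
frame.style.height = "844px";
document.body.appendChild(frame);
```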

In our case, we needed to represent an entire webpage within Viewport Tester, and understand the limits of doing so. Without getting too technical (that’ll come soon), we realized that there were very real constraints on what iframes could and could not do. Confronting this helped us understand why offline products were as common as they were.

That being said, we believed the value proposition, for users who want the least possible friction when quickly testing a responsive website, was still high enough to justify building the product.

Below are some of the limitations we ran into:

  1. Iframe content cannot be modified from the outside; an iframe can only be created or destroyed. This means that each time we want to represent a different device or webpage, we need to tear the old iframe down and create a new one (see the sketch after this list). And while this is trivial to do in code, it presents a number of performance and memory issues.
  2. Iframes have strict security policies, which prevent the parent page from reading or modifying cross-origin content. This limits how much a web-based viewport testing tool can manipulate the iframes it hosts.
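To make the first limitation concrete, here’s a rough sketch of the create-and-destroy pattern it forces. The helper name and container are hypothetical, not our production code:

```ts
// Hypothetical sketch: a cross-origin iframe's content can't be mutated
// from the parent, so switching device or URL means replacing the frame.
function showDevice(
  container: HTMLElement,
  url: string,
  width: number,
  height: number,
): HTMLIFrameElement {
  // Destroy the previous frame, if any.
  container.querySelector("iframe")?.remove();

  const frame = document.createElement("iframe");
  frame.src = url;
  frame.style.width = `${width}px`;
  frame.style.height = `${height}px`;
  container.appendChild(frame);

  // Note: frame.contentDocument is null for cross-origin frames, which is
  // the second limitation above in action.
  return frame;
}
```

Each replacement re-fetches and re-executes the embedded page, which is where the performance and memory cost creeps in.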


Keeping in mind the above limitations, here are some of the challenges we ran into:

  1. Resizing iframes to represent the exact dimensions of a device was tricky: it involved a lot of math and some quirky CSS properties (sketched below).
  2. Facilitating communication between iframes brought with it a number of issues, first and foremost what’s commonly referred to as a “race condition”: you need a series of events to happen in a specific order, but due to factors outside your control, they can happen out of order (a common mitigation is also sketched below).
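To give a flavour of the “math and quirky CSS”: showing a device at its true CSS-pixel size while fitting several on screen typically means laying the frame out at full size and shrinking the rendered result with a transform. A minimal sketch; the helper name, dimensions and scale factor are ours for illustration:

```ts
// Hypothetical sketch: render an iframe at a device's CSS-pixel size,
// then scale it down so several devices fit side by side.
function scaleFrame(
  frame: HTMLIFrameElement,
  cssWidth: number,
  cssHeight: number,
  scale: number,
): void {
  // Lay the page out at the device's real viewport size...
  frame.style.width = `${cssWidth}px`;
  frame.style.height = `${cssHeight}px`;
  // ...then shrink the rendered result without triggering reflow inside it.
  frame.style.transform = `scale(${scale})`;
  frame.style.transformOrigin = "top left";
}

// e.g. a phone-sized viewport shown at half size:
// scaleFrame(frame, 390, 844, 0.5);
```

As for the race condition, one common mitigation (not necessarily exactly what we shipped) is a ready-handshake over postMessage, where the embedded page announces it can receive messages and the parent queues anything sent before that point:

```ts
// Hypothetical handshake sketch. The embedded page is assumed to run
// window.parent.postMessage("frame-ready", "*") once its listeners exist.
const frame = document.querySelector<HTMLIFrameElement>("iframe")!;
const pending: unknown[] = [];
let frameReady = false;

window.addEventListener("message", (event) => {
  // Real code should validate event.origin before trusting a message.
  if (event.data === "frame-ready") {
    frameReady = true;
    for (const msg of pending) frame.contentWindow?.postMessage(msg, "*");
    pending.length = 0;
  }
});

function send(msg: unknown): void {
  if (frameReady) {
    frame.contentWindow?.postMessage(msg, "*");
  } else {
    pending.push(msg); // hold until the frame signals readiness
  }
}
```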

After we spent time understanding the limitations, we created a proof-of-concept (POC) to determine whether the challenges (including those listed above) could be overcome well enough for the product to provide clear value.

Luckily, that’s where we landed ;)


The product process

Believe it or not, this section will be shorter than the previous two 🤣 The main reason is that we’ll be publishing a more technically focused blog post in the coming weeks. So what we want to highlight here is how we moved through the product process.

Cumulatively, our team has over 250 years of development experience, so from the outset we knew development and QA wouldn’t be our biggest challenge. Rather, the product process was what we first needed to get right. Here’s how we approached things:

  1. Discussed the problem internally, and how common it is
  2. Began defining what a possible solution could look like
  3. Researched existing solutions
  4. Discussed our findings internally
  5. Scoped out the product in a PRD
  6. Discussed our approach internally
  7. Moved ahead with a POC to determine the technological viability of the product
  8. Discussed our approach internally
  9. Began development

One thing we hope you’ll notice in the above list is how often we returned to discussing the product before development began. This was to ensure the concept aligned well with a problem we understood.


Open-sourcing our data

As a group of 25+ developers at Bit Complete, we have a lot of combined experience with (and owe a lot to) open source software. While we were scoping out the product, we realized that sharing the data around viewports, devices and breakpoints would be a natural way for us to contribute.

Therefore, during the development-research phase, we ensured that the data we collected on 180+ devices and viewports could be accessed by others (see our repository here). We’ve open sourced this data (under an MIT license), with a focus on the following fields:

  • Unique IDs for viewports
  • Names & labels for viewports
  • Ranking details (e.g. how popular they are)
  • The dimensions of both the screen and the viewport (the two differ by the device pixel ratio)
  • Metadata related to the release date and platform
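As a rough illustration, a single record combining the fields above might map onto a shape like this. The interface below is our guess at a plausible encoding, not the repository’s exact schema:

```ts
// Hypothetical shape for one viewport record, inferred from the field
// list above -- check the repository for the actual schema.
interface ViewportRecord {
  id: string;                                   // unique ID
  name: string;                                 // human-readable name/label
  popularityRank: number;                       // ranking details
  screen: { width: number; height: number };    // physical pixels
  viewport: { width: number; height: number };  // CSS pixels
  devicePixelRatio: number;                     // screen = viewport × DPR
  releaseDate?: string;                         // release metadata
  platform?: string;                            // e.g. "iOS", "Android"
}
```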


Our next steps

After launching a product, one thing we find helpful is to take a break. The product, development, QA and launch process forces a focus on details, which can make it hard to keep the high-level picture in view. Taking a break (albeit one where we keep an eye on bugs and possible server issues) lets us monitor our analytics, understand how people are actually using the product, and define our next steps based on that real-world data.

With Viewport Tester specifically, we’ve decided our next steps include:

  • Abstracting out the product such that we can reuse the UI and UX to compare other developer-focused content (e.g. CSS Frameworks and Rich Text Editors)
  • Revisiting some features (including orientation changing) that we pushed to post-launch
  • Ensuring the viewport data we’ve open-sourced is up to date
  • Investing more time in SEO and page-load times


If you have any questions about our process or other products we’ve got in the pipeline for Labs, please reach out: [email protected]