Innovation @ BBG » Data

ODDI Demo Day Kicks Off the New Year
Mon, 13 Jan 2014 18:41:16 +0000 | Erica Malouf

Friday marked the last demo day at ODDI in its current format. In the past, project owners (team leaders) have given demo day presentations in ODDI's office, with an occasional note from a team member. From here on out, the emphasis will be on having team members take the lead in talking about their work, instead of the project owners.

In the past, stakeholders from within BBG have always been invited, but have rarely joined–it’s usually just ODDI staff who attend the Friday demos. Going forward, ODDI demo days will be centered around the stakeholder. Teams will schedule time with their various project stakeholders within BBG. The goal is to get internal customer feedback on a more regular basis as a part of our Agile, iterative approach.

ODDI scrum master Son Tran says that team-driven presentations give team members an opportunity to show that they are delivering on goals and owning the work they've done. He also notes that it's about the iterative process: “Closing the feedback loop and making it shorter is better for improving projects.”

What are we working on at ODDI?

For most teams, Sprint Zero was a time of research and planning, defining goals and determining KPIs. Adam Martin, our newly minted Director of Innovation, asked teams to come to the January 9 demo day “prepared to discuss their Charter as described in the Strategic White Paper, their shared vision in response to the Charter, the team’s goals, how they will measure their success against those goals, and their product(s) roadmap for Q2 of FY14 (and beyond if available).”

Now the teams are ready to bring their brilliant ideas to fruition. And some teams are also managing ongoing projects like Relay, RIVR, the BBG-wide analytics rollout, and mobile app updates.

Here’s a look at what’s happening:


Will Sullivan presents on the latest mobile app updates and the Symbian launch.


Project Owner: Will Sullivan

The Mobile Team is continuing to develop, update and support the suite of umbrella news applications for all BBG entities, which now supports more than 82 language services and has an install base of more than 400,000 users. We are launching new applications with Radio Free Asia (RFA) on Google Android and Apple iOS for both mobile and tablet form factors, and just launched VOA's Africa-focused Symbian application (Symbian is the third-largest mobile OS in the region, after Android and iOS; we launched it for VOA services last year). This quarter we will be updating the entire suite to a more magazine-style iPad design, building new Android home screen news widgets and moving the app analytics over to the shared Adobe Omniture SiteCatalyst system. We're also beginning work on a live audio streaming and on-demand podcast application for Android and iOS for the Middle East Broadcasting Networks' Radio Sawa that is visually rich, with a touch-centered interaction experience and deep user-generated content and social sharing integration.


Project Owner: Doug Zabransky

The Affiliate Digital Services (ADS) team represents a new chapter for USIM and affiliate relationships. Existing and new BBG affiliates will be offered up to three tiers of digital service. Each tier represents a level of digitally hosted offerings, including live streaming, adaptive HTML5 digital players, and an internet broadcast station that will allow for content-source switching between BBG live and on-demand content, as well as other affiliate content within the ADS community. All tiers include customer service and support.

Essentially, BBG hopes to build a robust network of affiliate partner on-line stations. Growing the BBG affiliate digital audience will grow BBG’s audience as well.


Project Owner: Rebecca Shakespeare

The insights team is focusing on setting up tools that collect and present objective information about digital performance, informing BBG leadership and editorial staff about what is actually happening with their digital products and content. The team is currently focused on the rollout of the new web analytics tool, which measures digital properties owned and hosted by the BBG. It is also contracting outside validation of the numbers that are collected and reported, to ensure the accuracy of the information presented. Beginning in February 2014, the team will start displaying weekly performance analytics from BBG's range of digital reporting tools, side by side in a dashboard, to present a complete picture of digital performance.



Brian Williamson’s illustrations illuminate the Storytelling Team’s vision


Project Owners: Steve Fuchs, Randy Abramson

The storytelling team is determined to revitalize and update USIM storytelling around the globe. We are brainstorming innovative ways to tell stories that inform, engage and connect with audiences based on their needs and expectations. One of our main goals is to build community engagement with younger audiences by using a toolbox of highly relevant, visual, trans-media storytelling techniques. We plan not only to count standard metrics–such as time spent, return visits, videos watched, and social engagement–but also to make a real-world impact that affects conversation and behavior. Randy will continue to work on Relay, and the entire team will work on projects like finding innovative ways to cover sports in developing countries, among others.

Other Teams & Projects

In addition to the teams that demo’d last Friday, ODDI also has several other teams that are kicking A and taking names.

The Research & Analysis (R&A) team functions as support for all other teams. R&A was recently pivotal in helping the Storytelling team and the Affiliate Digital Services team determine their next projects. During Sprint Zero, the R&A team dug deep to find data on countries around the world, interviewing internal experts and BBG’s Regional Marketing Officers, diving into BBG research reports and library databases, and translating that data into insights and strategic recommendations. The R&A team includes Son Tran, Ashley Wellman, Yousef Kokcha, Ahran Lee and myself (Erica Malouf).


Ongoing Project: Doug Zabransky will continue to lead the IVR project called RIVR. Look for a blog post update to come soon.

ODDI also has various teams working on ebooks, UX testing and more. Follow the action here on the blog, on Twitter (@BBGinnovate) and on our new website portal.

Golden Age of Journalism, Part II – Speed & Accuracy
Tue, 19 Nov 2013 16:21:26 +0000 | robbole

Speed

The increased speed with which news organizations gather and publish content is one of the most notable changes, and challenges, for digital organizations today.

An over-focus on speed without respect for accuracy leads to problems, often quite public problems, for careless news organizations. Our ability to identify information and publish it quickly has sometimes outstripped our collective journalism judgement.

However, there is a reason that speed-to-publish is a key part of journalism: our focus on speed is the result of an abundance of riches. The sheer number of observers with social media accounts, cameras and audio devices pointed at every news event happening in the world gives reporters and editors the ability to access content, and then rocket it around the world.

And this 24/7 ‘unblinking eye’ has brought us iconic real-time images of news events that we would never have seen before. So, sometimes speed is the point.

I have sat transfixed at my computer watching the Tahrir Square protests unfold in real time in front of me. We saw Neda Agha-Soltan die before our eyes during the Iranian protests in 2009. We gaped as survivors from the crash of US Airways Flight 1549 were picked out of the Hudson River, as it happened, through TwitPic and hundreds of recording cell phones.


The speed with which journalists, and the audience itself, have published raw content has acted as a powerful witness to important events – events that only a few years before would have been hidden behind a veil of geography. In this case immediacy and realism – being THERE – is true journalism. The editorial judgement was simply to point the camera and not interpret the events at the time.

But there are downsides to speed without journalistic curators and editorial judgement.

In the US, during the Boston bombing, it was Pete Williams of NBC News who brought a strong journalistic perspective to rapidly evolving events. And for the Arab Spring, for the Twitter audience, it was Andy Carvin at NPR. Both of these journalists, whether doing original reporting or curating actual and supposed eyewitness or exclusive information, stopped to ask the all-important questions: “Do we have another source?” “Can we corroborate that?” As Pete Williams described his approach to reporting, “the essence of journalism is the process of selection.”



Accuracy is not the antithesis of speed.

Editorial judgement connected to digital workflows can work efficiently to produce coherent and accurate news content in near real-time. Like sources and speed, accuracy can be aided by technology, but today we have to reassess our understanding of accuracy in a digital world.

At the core of accuracy is context; what is reliable? What is verifiable? What is the public interest? Which public?  What is the proportionality of one story to another?

And there are new technologies that are still emerging as important tools in developing accurate reporting.


In the digital age, the public is no longer a passive, remote receiver of news–they are a participant. The best news organizations understand this; they don't view the audience as a competitor, but as a collaborator. They may work closely with local bloggers to incorporate their work into bigger publishing channels. Or, for the sake of improving the news, they reach out to the public directly to work with and aid reporters as they pursue stories in the audience's interest.

This is exemplified by the experience of the Guardian, which produced incredibly detailed and accurate information about British MPs' expenses in 2009. Faced with mountains of paper-filed reports – remember data as a source! – they turned to the audience to help process those thousands and thousands of pages into data that could be analyzed. But beyond that, they trusted their audience: they asked readers not only to digitize elements of the reports, but to help identify what was interesting – in essence, alerting a reporter to the juicy bits in an MP's expense sheet. A “hey, this looks really, really bad!”

The success of working with the audience led to Guardian Witness, a new crowdsourcing platform for their journalism. On this platform, the Guardian can create a journalism task and ask the audience for help with a reporting project. Audience members might leave opinions, fill out a survey, identify some data, leave a picture – whatever is needed that one reporter, or even a team of reporters, could not hope to gather.
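Crowdsourced digitization like this usually assigns each item to several volunteers and then reconciles their answers. A minimal majority-vote sketch illustrates the idea (this is illustrative only, not how the Guardian actually implemented it):

```python
from collections import Counter

def reconcile(transcriptions):
    """Majority-vote reconciliation for one field transcribed by several
    volunteers. Returns the winning value and the share of volunteers
    who agreed, which can flag low-confidence items for editor review."""
    counts = Counter(transcriptions)
    value, votes = counts.most_common(1)[0]
    return value, votes / len(transcriptions)
```

Items with a low agreement share would be routed back to more volunteers, or to a reporter.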

The role of crowdsourcing in improving accuracy is starting to grow. For example, ProPublica's “Free the Files” project to help transcribe US political ad spending led in turn to the release of Transcribable, an open-source project that journalists can use to build their own crowdsourcing projects. Or OpenWatch, where news organizations can task citizen journalists around the world with coverage of news events (or find content they have uploaded), such as the protests in Istanbul or Egypt. Or services like Storyful, which let newsrooms extend their editorial staff by subscribing to its sourcing and verification of social content.


While this is still an emerging field, a number of people are thinking about how algorithms and computer agents can help us more quickly determine the accuracy of information. The Washington Post recently launched TruthTeller, an algorithm-based process that compares transcripts of video and audio against a database of facts to see if politicians are telling the truth.
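The Post has not published TruthTeller's internals here, but the core idea – matching spoken statements against a database of already-checked claims – can be sketched with a toy word-overlap matcher (a deliberately naive illustration, not TruthTeller's actual algorithm):

```python
def match_claims(transcript_sentences, fact_db, threshold=0.5):
    """Match transcript sentences against checked claims by Jaccard word
    overlap. fact_db maps claim text to a verdict string."""
    matches = []
    for sentence in transcript_sentences:
        s_words = set(sentence.lower().split())
        for claim, verdict in fact_db.items():
            c_words = set(claim.lower().split())
            # Jaccard similarity: shared words / total distinct words
            overlap = len(s_words & c_words) / len(s_words | c_words)
            if overlap >= threshold:
                matches.append((sentence, claim, verdict))
    return matches
```

A production system would use stemming, entity extraction and semantic matching rather than raw word overlap.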


Finally, in the area of accuracy we have to think about the tantalizing potential of drones. Drones give journalists new abilities to independently verify information, such as the extent of a natural disaster, or to monitor demonstrations from a bird's-eye view.

When you combine these nimble, independent sensors with high-powered computational video, photo and audio analysis, you get something that concerns many, including myself: the potential for privacy violations. There is potential here for journalists, but we must be very, very careful about how we deploy such a powerful tool.

Accuracy, reliability and the ability to present verified information are key values of news organizations. New technologies are helping journalists identify, vet and classify information, and ultimately increase the accuracy of our reporting. We need to use these tools and embed them into our everyday workflows. It is somewhat ironic that at the same moment we have gained tools with the potential to augment the core ethics of journalism, those same tools can also undermine them. And, of course, the most important element in the end is the quality of the individual. A recent quote by Norman Pearlstine of Time Inc. highlights this point. His quote, paraphrased: “Pick the best editor and everything else falls into place.”

The next and last in this series of posts will turn to the people part; the jobs, skills and instincts in the new newsroom.

10 Tips for Rolling Out Enterprise Analytics
Fri, 01 Nov 2013 21:01:12 +0000 | Rebecca Shakespeare

The biggest challenge for measuring digital analytics at BBG is the agency's size. The BBG has more than 300 websites (most of them mobile responsive) and mobile apps, five separate organizations, and more than 100 units that need individual reporting.

When we committed to getting an enterprise web analytics tool, we were using at least three separate Google Analytics implementations, a SiteCatalyst implementation, and other tools to measure whatever we couldn't catch in those. VOA alone had 50 Google Analytics profiles.

While everyone was doing due diligence to maintain their analytics tool(s), we didn’t have a way to look across the whole organization’s digital properties. Some of the questions we couldn’t answer before were things like “Which BBG network is most popular in Vietnam?” or “Of our Russian-language content (many sites’ worth), what topics are most read by Russian speakers in Kazakhstan?” And every report about the BBG’s digital performance in general required calls for data and assumptions that the data all meant the same thing, even though it came from different places. It’s hard to make business and content decisions based on shaky data.

Planning for the new web analytics tool revolved around answering those questions, and making nearly instantaneous feedback about people finding and engaging with our content accessible to everyone–journalists (content reports by byline, so writers can see how their content performs), editors (content reports by topic, so it’s easier to get a feel for topical interest in a target region), marketers (everything that the BBG does, as consumed by a given city or country) and strategists (the whole universe of the BBG’s content, consumed by the whole world).

After lots of hard work, the expertise of some great thinkers and consultants, and really good feedback from our editorial teams, we’re anticipating a really exciting outcome–usable information that tells stories about all BBG entities online that nobody has ever had before.

As with all web analytics tool changes, we anticipate changes in the numbers we get–different tools count things slightly differently, so we may see all traffic increase or decrease by a constant amount. When we start tracking mobile visits too, we’ll see another change in traffic. I’m already looking at how our new setup is getting different numbers than the tools we have been using.

None of these changes mean that our audience has changed how it behaves. It just means we’re recording it differently. And the specific number isn’t the most important thing in web analytics; the stories the data tells and the information it can help you find are the valuable insights.

When you move to any new tool, you have a new baseline and a new normal daily or weekly number. You want to keep your eyes out for changes–good and bad ones–and determine what caused them. You want to monitor projects you’re putting effort into to see if you’re getting the outcome you want. And if you’re targeting a certain audience, you can get to know them based on what they do, and try to get to know more about what they respond to by testing things that you think they’ll respond to. For example, this might be a slightly different headline, a different angle, using more or less pictures, or promoting a story with a different hook on social media.
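When testing a headline or hook like this, the practical question is whether an observed difference in click-through rate is bigger than chance. A standard two-proportion z-test is one simple way to check; this sketch assumes raw click and view counts for two variants:

```python
from math import sqrt

def ab_significance(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test comparing the click-through rates of two
    headline variants. |z| > 1.96 is significant at roughly the 5% level."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    # Pooled rate under the null hypothesis that both variants perform equally
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    return (p_a - p_b) / se
```

Real experiments also need sample-size planning up front, so a test isn't stopped the moment the numbers look good.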


If you’re planning an enterprise web analytic tool rollout, here are some tips:

1. Find great experts to advise you. We were lucky to get to work with outside consultants who helped us define reporting requirements before we selected a new tool, and to work with great vendor specialists who helped us turn those into reality. We not only have a setup with best practices, we learned a lot about the tools and their uses by working with industry leaders.

2. Thoroughly assess what information is most useful to stakeholders across your organization before you start setting up or selecting a tool. We had consultants come in and hear from our internal key stakeholders what they wanted to know. They assembled the organization-wide feedback and made expert judgement calls on what data we needed to gather and how we should present it.

3. Decide whether to use a tag manager. We had the luxury of choosing whether or not to get a tag management tool. We chose to get one because we have many different groups managing the technical side of our digital properties but wanted to maintain a unified analytics/measurement system. Using a tag manager centralizes our web analytics management.

4. Plan a clear, specific structure for naming and tagging. We worked closely with the technical teams to create a data layer on all of our websites containing uniform information about the page and the site. This means the data in our web analytics tool is clearly named across all of our web analytics report.
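A uniform data layer is simply a structured object that every CMS emits in the same shape so the tag manager can read it identically everywhere. As a sketch (the field names here are hypothetical, not BBG's actual schema), it might look like:

```python
# Illustrative page-level data layer; field names are assumptions.
page_data_layer = {
    "site": {"entity": "VOA", "language": "english", "environment": "production"},
    "page": {
        "title": "Example Article",
        "section": "africa",
        "content_type": "article",
        "byline": "Jane Reporter",
        "publish_date": "2013-11-01",
    },
}

def validate_data_layer(layer, required=("site", "page")):
    """Return the required top-level keys missing from a page's data layer,
    so tags are only fired on well-formed pages."""
    return sorted(set(required) - set(layer))
```

Validating the layer before firing tags catches template bugs before they pollute reports.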

5. Keep a list of priorities. Know which reports or platforms you're tracking are most important or most time-sensitive. Knowing what's most important makes it easier to pick the elements that won't get built when resources run low. Clear priorities make it easier to move forward to an actual release instead of waiting to complete everything perfectly.

6. Know the field limits for your analytics tool and any tag manager. The last thing you want in your reports is awkwardly truncated page titles or, worse, gibberish. Multibyte languages have more bytes than characters, and automatic truncation may garble them. One of our developers alerted me to this, and we made wiser decisions knowing exactly how much information we could track. (RFA developer Flip McFadden has further reading on truncating multi-byte languages.)
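The byte-versus-character distinction matters because analytics fields are usually capped in bytes, while non-Latin scripts use two to four bytes per character in UTF-8. A minimal sketch of a truncation that never splits a character (illustrative, not the exact approach used at BBG):

```python
def truncate_utf8(text, max_bytes):
    """Truncate text to at most max_bytes of UTF-8 without splitting a
    multibyte character (which would leave gibberish in reports)."""
    encoded = text.encode("utf-8")
    if len(encoded) <= max_bytes:
        return text
    # Decoding with errors="ignore" silently drops a partial trailing character.
    return encoded[:max_bytes].decode("utf-8", errors="ignore")
```

Naive byte slicing without the `errors="ignore"` decode is exactly what produces mojibake at the end of truncated titles.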

7. Find out what's easy to set up but hard to change. Some things, like profile names, report suites, reporting hierarchy, and default values, are better to set correctly at the beginning. Other things, like dashboards, are easy to change later. Know what to commit to early, and what you can wait on or change later.

8. Organize page-level and site-level variables early. This really only applies to implementations where you need multiple content management systems to track the same way in analytics. We created a matrix of all variables for each type of page on each CMS with sample values and notes for the developers. We also created a matrix of all site-level variables for each property. Both of these reference documents continue to be invaluable.

9. Make sure you know exactly which things happen in tool configuration and which things are coded onto your site. This is particularly important if you’re not technical. If your tag manager is separate from a web analytics tool, just give the developers tag management documentation. You’ll set up the web analytics tracking inside of the tag manager. If you conflate these, you’ll confuse yourself, and probably slow down development work.

10. Prepare to spend a lot of time checking that your new tool is configured correctly. Good documentation – including what domains you expect to see in what reports, and a complete list of all reports – is really helpful here.

Special thanks to Ahran Lee, Designer at BBG, for creating the artwork for this post.

National Day of Civic Hacking at the White House
Mon, 10 Jun 2013 15:07:33 +0000 | Lisa Backer

June 1 was the National Day of Civic Hacking.  Over 11,000 civic activists, technology experts, and entrepreneurs in 83 cities developed software to help others in their own neighborhoods and across the country.  The White House hosted more than 30 developers and designers to work with the We The People data API – the API that powers the We The People petitions website.  Attendees had submitted ideas and portfolios for consideration back in April, and I was one of those selected to participate.

The day began with welcome notes from our hosts, the White House Office of Digital Strategy.  Representatives from NASA lamented that this day would beat the record for the largest hackathon, which they had set only a few weeks prior.  From there, each participant gave a brief introduction covering who they are, the skill sets they bring, and any project ideas they had.  The room included talent from local and federal government agencies, private industry, and even representatives of technology giants such as Facebook and Google.  Many had travelled across the country just to take part in the event.

I formed a team with Bryan Braun, a contractor with the White House Office of Digital Strategy, and Ben Damman, an entrepreneur from Minnesota, to work on Petitions NewsLink.  Our goal was to display aggregate data about the popularity of petitions over time by issue, and to let the user dig deeper by viewing related news from a point in time tied to the most popular petitions within the selected issue.  Our hypothesis was that many petitions were formed in response to major news events.  One of the most obvious examples would be a spike in submitted petitions in the “Firearms” issue category after the tragic shootings in Newtown, CT.
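Testing that hypothesis means finding days where petition activity surges well above its recent baseline. A crude trailing-average spike detector sketches the idea (window and factor values are arbitrary choices, not what NewsLink used):

```python
def find_spikes(daily_counts, window=7, factor=3.0):
    """Flag the indices of days whose petition count exceeds `factor` times
    the trailing `window`-day average – a crude way to surface news-driven
    surges worth matching against news archives."""
    spikes = []
    for i in range(window, len(daily_counts)):
        baseline = sum(daily_counts[i - window:i]) / window
        if baseline > 0 and daily_counts[i] > factor * baseline:
            spikes.append(i)
    return spikes
```

Each flagged date can then be fed to a news archive search to find the likely triggering event.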

In order to put this puzzle together, our team decided to utilize:

  • Highcharts JS library for data visualization
  • We The Entities API to perform textual and sentiment analysis on petition text.  This API was actually created just in advance of the Hackathon by a fellow participant and was under development throughout the day.
  • New York Times Article Search API in order to search articles by keyword in particular sections

We also quickly learned that we needed a caching layer in order to perform aggregate look-ups on the data.  Ben took the role of creating this layer, Bryan took the user interface role, and I took the role of communicating with the We The Entities and New York Times APIs.  By this time everyone had settled into teams and sat down to work, and the room was quiet.
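For a hackathon project, such a layer can be as simple as memoizing expensive aggregate look-ups with a time-to-live. A minimal sketch (not the implementation Ben actually built):

```python
import time

class TTLCache:
    """Minimal time-based cache: recompute an expensive aggregate look-up
    only when the cached value is older than ttl_seconds."""
    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, timestamp)

    def get_or_compute(self, key, compute):
        entry = self._store.get(key)
        now = time.time()
        if entry is not None and now - entry[1] < self.ttl:
            return entry[0]  # fresh enough, skip the expensive call
        value = compute()
        self._store[key] = (value, now)
        return value
```

This keeps repeated chart renders from hammering the upstream APIs while still refreshing data every few minutes.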

My first thought had been to also utilize the New York Times Tags API to correlate We The People issue categories with New York Times tags, which could then be used to limit news results to only the most relevant.  After spending a good deal of time working with their API, however, I realized that the data set returned when limited by tag became too limited when searching within a short window of dates.  With that work tossed aside, I began to focus on having a workable demo ready by our 4:30 demo time.  We agreed that our goal for the first demo was to have a broad result set, and not necessarily to spend our time refining the searches further – that work could be saved for the next phase.  As our demo time approached we worked feverishly to finish up.

The demos showed us a broad set of projects that had been created from a small, but powerful, We The People API.  They included two WordPress plugins for embedding petitions, a data visualization showing the party affiliation of signers, mapping visualizations, and type-ahead tools to enhance searching and filtering of the data set.  Of particular interest to journalists was a tool that imports data directly into Google spreadsheets, letting non-technical users analyze and chart data within a familiar toolset.  The full API gallery is available online.  Most projects include links to demos and/or source code.

Although the day is over, the work continues.  I have continued to refine the news API integration and have rewritten it as a lighter-weight JavaScript implementation.  The original Petitions NewsLink team is still communicating via GitHub to improve the project.  I would like to see the integration of other news sources and refinement of the date algorithms used for searching.  My interest has also been sparked to use available APIs to find out not just what caused petitions, but whether petitions spark action in Congress.  A future project will likely be to use the Sunlight Foundation's Capitol Words API to search for key terms from the petitions to see if they have been used in the Congressional Record.

Innovation Strategy on a Global Scale, 2013
Tue, 26 Feb 2013 06:24:52 +0000 | robbole

The Office of Digital & Design Innovation launched roughly a year ago with a very straightforward mission: expand the use of digital platforms to grow our global online audience.  We do that by bringing best-in-class platforms and services to our partner media networks, as well as by experimenting with and launching innovative new technologies that speed our transition toward serving increasingly online audiences.

Over the last twelve months we have been working on the “ground game”: migrating off of old platforms, adopting new agile software frameworks and generally preparing the ground for faster innovation.  I am very proud to say that with our close partners, especially Radio Free Europe's digital team, we have fully turned over all of our core infrastructure on time and on budget.  And in an unprecedented event, we will be able to take some operational savings and invest them in new areas, such as expanding our mobile presence and improving digital syndication.

In this current year we are going to expand our presence and quicken the pace of introducing new products and services.  We have a mandate for change and now are fully ready to drive innovation that leads to audience growth.

Here is our plan for 2013.


2013 Strategy & Goals

1.  Integrate Digital Platforms: Develop our new core digital services to an effective “run” state in order to provide normal enterprise operating services to all of USIM.  ODDI is working closely with our colleagues in RFERL Digital, as well as with RFA, MBN, VOA and OCB, to ensure that our core services, such as the online video/audio platform (OVAP), mobile web and mobile applications, are effectively established for all of USIM.  In many places we believe that integration into the “Pangea core” and RFA’s system will enable important improvements in our operating efficiencies.

Digital platform highlights include:

  • Full integration of the Kaltura online video/audio platform (OVAP) into Pangea: ensure that video and audio management becomes a ‘back office’ function to a user of the Pangea CMS and enable seamless distribution to all USIM accounts, including external accounts like YouTube and SoundCloud.  We also want to do a complete implementation of mobile-compliant audio/video players for iOS, Android and other mobile devices.
  • Deliver enhanced live streaming capabilities for 24-hr “true” Mp3 audio playout: create capabilities for streaming services on digital channels such as Apple iTunes, TuneIn, Stitcher and other radio streams.
  • Expansion of Direct to provide services to all entities and all content types: provide technical connectors to allow all entities to seamlessly publish a wide-range of content types (broadcast-quality to Internet-quality rich media, text, photos, etc) for a range of broadcast and digital affiliates.
  • Launch “measure everything” platforms: launch new platforms and technical services to ensure cross-agency tag management, web analytics, social media analytics and video analytics.  In addition, launch a powerful analytics application programming interface (API) and customizable dashboards of real-time analytical data for all levels of the organization–from the Board down to the editor and reporter levels.


2.  Grow Mobile: Drive future (“road map”) improvements and expansion of our mobile platforms and services to increase our global audiences.  Mobile is the single most important method for USIM to be able to reach audiences.  Statistics often point to the fact that mobile adoption has a lot of room to grow or that there is a clear ceiling on the use of fixed-line broadband in different regions.  Our goal is to deliver the platforms and services that enable all entities and language services to deliver content across all mobile devices–from high-bandwidth IPTV applications down to simple feature phones.  And, just as important, we want to facilitate the use of voice/audio over local phone calls.

Mobile highlights include:

  • Launch of new news “Umbrella” applications for all five entities.  In conjunction with the entities, we will be launching and improving a range of mobile news applications.
  • “Responsive+” on core digital platforms.  Re-development of our core digital sites to utilize both responsive web design and progressive enhancement with server-side detection through a mobile-first strategy.  This change will enable us to provide digital content across a wide range of devices and bandwidths, customizing the content for the user, based on their device’s hardware and software capabilities and network connection.
  • Expansion of IVR and other low-bandwidth mobile publishing.  Improving existing open source frameworks to enable enterprise Interactive Voice Response (IVR) services that offer low-cost local calling for the audience and low operating costs for BBG.
  • e-Book, magazine publishing improvements.  This year we will be piloting a number of design templates and easy workflows to create interactive books and magazines for the distribution of collections of content both in static (text) and dynamic, rich media formats.


3.  Expand Audience Engagement: Implement an innovative initiative that builds a USIM-wide, audience-centric sourcing, storytelling and distribution service. We are focused on elevating the role of the global audience in our work as journalists, from enhancing simple commenting and discussion tools to supporting direct audience participation while covering events. Audience engagement occurs within a news organization when three critical pieces align: business strategy, technical capabilities and editorial management.  Our office will elevate the notion of audience engagement throughout our language services while simultaneously increasing our digital capabilities.

Audience engagement highlights include:

  • Strengthening core content (text/audio/video) platforms.  Working closely with RFERL and TSI, we will focus on enhancing our current infrastructure, as well as adopting or building enhancements to platforms and services that enable audience members to participate in our journalism.
  • Interactive storytelling expansion.  We are introducing a number of new JavaScript and other frameworks to enable new types of storytelling by our journalists.  Our goal is to identify, seed and then support a core group of video and audio producers to understand and use Popcorn.js, Timeline.js and other frameworks to publish interactive content–especially using audience-generated materials.
  • Audience engagement testing.  To engage with audiences, we need to understand their interests, preferences and cultural lens so we can present compelling content and products that encourage their participation.  We will be partnering with BBG Research to identify and test digital products in-country, especially to discover better ways to create and develop content with audiences.


4.  Grow Digital Affiliates: Expand the number of websites and digital services that carry USIM content through new API and other syndication services.  Our goals are to: 1) replace expensive satellite distribution with lower-cost Internet-based distribution wherever possible; 2) increase the ability for ALL entities to share, distribute and create content with local partners; and 3) build a new class of “digital affiliates” in the form of syndication points (i.e. Google Currents), blog networks, emerging all-digital news organizations, etc.  To support these goals, we will build an expanded “affiliate storefront” using a robust application programming interface (API) strategy.

Digital affiliate highlights include:

  • Increased syndication partnerships.  This includes regional goals whereby we will launch two to four quarterly syndication agreements with global partners, as well as targeted regional syndication deals in Eurasia, Africa and Southeast Asia.
  • Direct API/digital affiliates program.  We have three goals in this area: 1) the integration of Direct with our Kaltura OVAP system for the inclusion of Internet-quality video and audio content in affiliate distribution; 2) integration with OSD’s customer relationship management system to enable affiliate information to flow between the two systems; and 3) a public-facing API to enable existing affiliates, as well as the potential for a new class of “digital affiliates”, to have our content delivered to them dynamically.
  • Strong syndication analytics system.  This includes the expansion of our analytics platforms, as well as offering training and simple dashboard tools, to enable a more robust tracking of digital content usage by existing and new affiliates. We hope to provide business/editorial managers with more information on the use and consumption of their content by third-parties.


In order to accomplish these goals, ODDI is going to continue to evolve its operations and capacities.  We have been replacing remote vendors with an increasing number of “makers” at the staff level, or with full-time, in-office contractors.  As resources become available, we will add capabilities to the office.  We will continue to balance an expanded, full-service, in-house capability to build, maintain and grow a range of new digital platforms against a rational number of high-quality, best-in-class vendors.  In particular, we will focus on expanding our capacity in three critical areas: technical development/programming, user experience design/storytelling support, and digital data analysis in support of product development and strategy.

If you have any questions, comments or thoughts on improving our 2013 strategy, please let us know!  [You can leave a comment below or contact us on Twitter (@BBGinnovate).]

- Robert Bole, Director of Innovation, Office of Digital & Design Innovation

HTML5 Mapping and Interactive NOL Thu, 10 Jan 2013 00:49:17 +0000 April Deibert

A few months ago, I posted about HTML5 video and Randy Abramson (Director of Product and Operations) posted about News On Location (NOL).  In this post, however, we want to take a look at how HTML5 mapping is evolving and how BBG’s NOL may make use of the technology in the near future.

Knowing the location of users (with their permission, of course) can be a good thing both for them and for your service.  Not only do users often feel that they’re receiving personalized results, but they can also contribute to live maps and live feeds, making their entire interaction with your site more relevant.  This is great news for you: with an improved UX, your web metrics have the potential to flourish.
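
The browser side of this is the HTML5 Geolocation API, which only returns a location after the user grants permission. A minimal sketch in plain JavaScript (`attachToFeed` is a hypothetical stand-in for a live-feed hook, not a real NOL function):

```javascript
// Turn a Geolocation position into a simple record a live map or feed could use.
function toFeedPoint(position) {
  const { latitude, longitude, accuracy } = position.coords;
  return { lat: latitude, lon: longitude, accuracyMeters: accuracy };
}

// In a browser, the permission prompt happens inside getCurrentPosition;
// the error callback fires if the user declines or lookup fails.
if (typeof navigator !== 'undefined' && navigator.geolocation) {
  navigator.geolocation.getCurrentPosition(
    (position) => attachToFeed(toFeedPoint(position)), // attachToFeed is hypothetical
    (err) => console.warn('Location unavailable or declined:', err.message)
  );
}
```
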

Here are a few examples of really cool HTML5 mapping uses and how similar techniques could be applied to NOL:


Austin Music Map

Description:  “…Austin is full of amazing musical moments. Lots of them where you least expect them. That’s where you come in. The Austin Music Map is a collaboration between KUT Austin and YOU. Take us into your corner of the city. Show us a musical venue we’ve never heard of before. Surprise us with your favorite undiscovered musician.”

Why it’s cool: Austin Music Map instructs users to snap a pic with their phones, make a video, or record a story about one of their favorite musical moments in the city—then post it to the website.  By tagging the media with the venue and neighborhood where it was captured, plus hashtagged words that describe the event, users are able to add to a growing public playlist so you can “play the city”.

Applied to NOL: NOL wants to bring local news and culture to life through the sights and sounds of the people on the ground.  What better way to let users participate in ‘remixing the news’ than by adding their own photos, videos and sounds? Users could tag items to create interactive, region-specific media playlists.



SoundCloud API

Description: “If you build an app or web service that generates any type of sound, it’s easy to connect it to SoundCloud and enable your users to share their creations across the web. Allowing users to share what they create to their existing social networks and the SoundCloud community brings great value in a variety of use cases. … Letting users share tracks is also a great way of virally-promoting your app. Uploaded tracks will automatically be tagged as uploaded with [your app], so when a user shares a track on Facebook, his friends will see what app the track was created with.”

Why it’s cool: You can share sounds and recordings from your specific location, then share or embed them.

Applied to NOL: Ever wanted to hear an unedited clip from a revolution to see if you can understand what people are really saying on the street?  You could.  Ever want to pump up your speakers for a dance party in your own living room listening to a live stream of your favorite band performing at a music festival in your home country?  You could. News On Location could use the API to allow users to share audio commentary from the ground.  New users that come to that spot could then react to that clip and create their own contributions to the conversation.
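
As a rough illustration of what such an integration could start from, here is a small helper that builds a query URL for SoundCloud's public REST API. The endpoint shape and `client_id` parameter are assumptions to verify against the current API docs; `MY_CLIENT_ID` and `renderClips` are placeholders:

```javascript
// Build a query URL for SoundCloud's public track search.
// Endpoint and parameters are assumptions to check against the API docs;
// the clientId value would be a registered app's key.
function trackSearchUrl(clientId, tag) {
  const params = new URLSearchParams({ client_id: clientId, tags: tag });
  return `https://api.soundcloud.com/tracks?${params}`;
}

// In an app, the URL could then be fetched and the JSON list of tracks
// rendered as location-tagged audio clips, e.g.:
//   fetch(trackSearchUrl('MY_CLIENT_ID', 'cairo'))
//     .then((res) => res.json())
//     .then((tracks) => renderClips(tracks)); // renderClips is hypothetical
```
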




Zeega

Description: “Remake the Internet—Zeega is a community of makers passionate about creating immersive experiences that combine original content with media from across the web.”

Why it’s cool: Zeega was demonstrated at the London Mozilla Festival in 2012 and exposed developers to how it’s more than an interactive storytelling tool.  Zeega’s developers liken the technology to Tumblr or WordPress.  In a nutshell, a blog can be transformed into a rich interactive site—full of audio, video, images and a means of easily sharing everything.

Applied to NOL: Again, users could collect sounds and photos on mobile devices and geo-tag them.  Once uploaded, the photo and audio could be remixed into Zeega presentations that could be consumed on desktop or mobile devices.  The playback of these presentations could allow for users ‘off-location’ to feel like they are ‘there’ with the contributors.  Social integration between the presentation and users on location can further dialogue between the ‘on-and-off-location’ participants.




Description:  “Design maps in the cloud, publish in minutes.”

Why it’s cool: Custom maps can be designed and published in minutes (all powered by OpenStreetMap data).

Applied to NOL: Journalists can create custom, detailed maps of specific events.  Colors and styles can be changed, terrain layers can be integrated to show elevation, and maps can be annotated with pins, symbols, icons and interactive tooltips.  Maps can then be shared or embedded and represented with the NOL application for ‘on-and-off-location’ users.
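
One way journalists' pins and users' geo-tagged contributions could travel between tools is GeoJSON, the common interchange format for web maps (including OpenStreetMap-based ones). A small sketch, with made-up field names for the contributions:

```javascript
// Represent geo-tagged contributions as GeoJSON, the interchange format most
// web mapping tools accept. The field names (title, mediaUrl) are invented.
function toGeoJSON(contributions) {
  return {
    type: 'FeatureCollection',
    features: contributions.map((c) => ({
      type: 'Feature',
      // GeoJSON coordinates are [longitude, latitude], not [lat, lon]
      geometry: { type: 'Point', coordinates: [c.lon, c.lat] },
      properties: { title: c.title, mediaUrl: c.mediaUrl },
    })),
  };
}
```

A collection like this could be handed to any mapping library to draw pins and tooltips for ‘on-and-off-location’ users.
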



Additional Examples and Resources:

- Kartograph: A simple and lightweight framework for building interactive map applications without Google Maps or any other mapping service.  It was created with the needs of designers and data journalists in mind.

- Georelated: A blog full of articles that relate to the art of web mapping. Includes a lot of technical reviews detailing what’s possible now and may be possible in the future.

- GeoCAT [Video]: Perform rapid geospatial analysis of species in a simple and powerful way.

- SVG Open Conference (2011), “Even Faster Web Mapping”, by Michael Neutze.  [Neutze’s video presentation and slideshow can be found here.]

- HTML5 and Esri-based Web Mapping [Video]

- HTML5 Canvas Visualization of Flickr & Picasa API [Credit: Eric Fischer of The Geotaggers’ World Atlas]

- PBS FRONTLINE’s Interactive: David Coleman Headley’s Web of Betrayal
[Read more about the process here]


- – - – -

(Thank you to Randy Abramson, Eric Pugh and Rob Bole for their link suggestions, quotes and additional contributions to this article.)

(The foregoing commentary does not constitute endorsement by the US Government, the Broadcasting Board of Governors, VOA, MBN, OCB, RFA, or RFE/RL of the information products or services discussed.)

Data Visualization: That Was Simple! Mon, 05 Nov 2012 23:14:11 +0000 April Deibert

[ Many thanks to Ahmad AbouAmmo for this great blog post.  I've cross-posted it from his blog. ]


Data Visualization: That Was Simple!

Working with data is not about the numbers; it is about presenting them creatively. Digital media professionals process lots of data on a daily basis. Analyzing this data allows us to find important trends and information, which can be shared with customers (and of course senior management). There are many ways to present the data: a simple table of results (boring), or a creative visualization with images, maps, and other graphics.

One of my favorite ways to visualize data is Google Fusion Tables. In this post I will discuss how to create a map that shows Obama and Romney Twitter mentions in the Middle East and visualize the data on a Google map.

Step 1:

Before the word visualization comes the word data. Finding specific data, whether based on a hunch or on trends, is the first step. There are many ways to do so, and one of them is social media monitoring tools. These tools are perfect for collecting massive amounts of social media data based on many variables, such as demographics, location, keywords, etc. There are many monitoring tools available on the market, such as SalesForce Marketing Cloud (formerly Radian6), Lithium, Sysomos, and others. In this post, I will be using Sysomos MAP.

I won’t be discussing how to use social media monitoring here (I will leave that to another post). I will, however, provide my search criteria:
1.     Search for the word “Obama” in English and Arabic in each country in the Middle East
2.     Search for the word “Romney” in English and Arabic in each country in the Middle East
3.     Use a six-month period for the search
4.     Record only Twitter mentions
5.     Make note of any specific keywords in each country related to each candidate

After collecting the data, I usually save it in an Excel sheet, which allows me to do further analysis if I need to. You can download the Excel sheet here.
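
Before the upload, the counts need to be in a tabular form Fusion Tables (or any spreadsheet) can ingest. A tiny sketch of assembling CSV rows from collected mention counts; the numbers here are invented, not real Sysomos results:

```javascript
// Assemble per-country Twitter mention counts into CSV text ready for
// a spreadsheet or Fusion Tables upload.
function toCsv(rows) {
  const header = 'Country,Obama Mentions,Romney Mentions';
  const lines = rows.map((r) => `${r.country},${r.obama},${r.romney}`);
  return [header, ...lines].join('\n');
}

// Illustrative, made-up counts:
const sample = [
  { country: 'Egypt', obama: 1200, romney: 300 },
  { country: 'Jordan', obama: 450, romney: 90 },
];
```
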

Step 2:

Review your data. What are the most important trends? Which country has the most tweets? What is the most important keyword for each candidate? Asking a few questions like these helps determine how the data should be visualized in Google Fusion Tables.

Step 3:

Okay. Now we have the data, and we know what we want to visualize. The fun part begins:

1.     Go to this link:
2.     Login using your Google account (did I mention you should have an account?)
3.     In the main window, go to the “Create” button on the top left
4.     Click on “More” and then “Fusion Table (Experimental)” <– don’t let this scare you

5.     Upload your data. In my case, I will upload my data from the excel sheet I created (You can download the excel sheet I created here to use for your test)
6.     Once uploaded, click on “Map of Geometry”
7.     Google Fusion Tables will locate the countries on the map and add their related data
8.     Skip to step 17 if you do not want to highlight the countries’ boundaries

Steps 9 to 16 are advanced:

9.     To highlight the countries that contain the data, we need to locate another table that contains border information. Here is a link to a good library
10. Once in the new table, click on “Visualize”, and then “Map”
11. Copy the URL

12. Go to your original table and click on “File”, then “Merge”
13. Paste the link of the boundaries table data into the “Or paste a web address here:” and then click “Next”
14. Choose corresponding columns that contain similar data. In our case, it is Country and Name. Click “Next”, “Merge”, and then “View Table”

15. The new “fused” table is created. Now click on “Map” to view that data visualized on the map. You may need to zoom in
16. You can change how the highlights look. Go to “Tools”, and then to “Change map styles”


17. To control which data is shown, you have to be in the Map view; click on “Table”, then “Change Info Window Layout”
18. You may add and delete any data you want to show in the info window

19. Once you are done, click on “Tools”, “Publish”. You will have to make the data public by changing its visibility

20. Copy the link. We are done :)

Here is how the map will look:


Follow Ahmad on Twitter: @ahmadaa


Designing the Future of News Thu, 11 Oct 2012 21:22:25 +0000 April Deibert

Update via Steve Fuchs, ODDI’s Manager of Design:





New York media design firm Theo and Sebastian has unveiled a detailed design draft of a responsive/adaptive VOA Pangea site.  The new look is a culmination of efforts to design the future of news and represents a 16-month effort spearheaded by ODDI Creative Services–in collaboration with VOA and RFE/RL.






The new look:

- Features a responsive/adaptive content and grid structure (one design adjusts the same content to fit web, tablet, and mobile)

- Leverages Pangea’s “widget-based” structure for easy to produce modules for long-term scalability

- Has a range of scalable modules for all VOA Services’ content needs (there is a widget for everything) and pays special attention to “TV-First” and “Radio-First” language services

- Enables flexible photo presentations (large, small, many, or even no photos will all look good)

- Supports language friendly design conventions (no restrictions on headline lengths, wider columns for lengthy languages)

- Features dedicated advertising and promo spaces

- Improves typography

- Supports program pages with inline players and prominent integration of social media

- Displays indicators of social engagement and popularity

- Is easy for editors and staff to curate and manage

- Has a personal playlist feature with content saved for reading later



This new design will become a master template for further VOA responsive/adaptive development on Pangea.


- – - – -

(Thank you to Steve Fuchs for his contributions to this post.  To contact Fuchs: sfuchs at bbg dot gov)

(The foregoing commentary does not constitute endorsement by the US Government, the Broadcasting Board of Governors, VOA, MBN, OCB, RFA, or RFE/RL of the information products or services discussed.)

Interview: Daniel Jacobson & How API Drives Digital Media Strategy Tue, 02 Oct 2012 15:07:29 +0000 April Deibert

Daniel Jacobson is the Director of Engineering (API) at Netflix and author of the book APIs: A Strategy Guide (O’Reilly Media, 2011).  After a long stint on, and then leading, NPR’s digital team, Jacobson gained vital experience addressing difficult API technology and business strategy challenges.  Netflix took note of his innovative approach and brought him on board to join their staff of technologists.  What follows is a transcript of a conversation between Office of Digital and Design Innovation Director Rob Bole, ODDI Multimedia Blogger/Producer April Deibert, and Jacobson about how to strategize internal and public API development.


[Video: What is an API? (Volume I), Source: apigee on YouTube]


Bole: Why write the book and what have you found so far?

Jacobson: The purpose of the book is to focus on the business aspects of API development.  There were already a lot of books about the technical aspects, design and best practices of API development.  The other authors and I felt that there was no holistic picture of APIs–of the benefits, the detriments, the legal ramifications, and the security ramifications.  We wanted to just step back and think about all the things that you have to consider when you are thinking about an API and how that translates into execution or implementation.  So one of the key principles that we try to lay out throughout the entire book is that you should know your audience.  For example, if your audience is composed of internal development teams and you don’t open the API up publicly, that has different legal ramifications than opening it to public developers.  We felt that was a really important message that we needed to convey, especially as the API industry is really ramping up.


Bole: In your book, you discuss that the real growth of APIs has been internal (a core business mission) rather than external (crowdsourcing innovation).  Do you recommend internal or external strategy first for API development?

Jacobson: Different businesses thrive on different things.  I think Twitter has benefited from their public APIs and they wouldn’t be who they are today if they didn’t do that.  But, they are changing their strategy now.  It really depends on the business.  That said, when NPR launched their API and Netflix launched their API, both companies had the idea of “let 1000 flowers bloom”.  Meaning, create a field of opportunity and then see what sprouts up around it.  Both companies wanted to take advantage of the crowdsourcing opportunity.

However, before I left NPR, it became very clear to me that the public API wasn’t really going to change anything for us in a meaningful way.  The major transformation was the internal consumption, such as: building iPhone apps off of the API, distributing to member stations, letting member stations post into the API, then redistributing that information out to other member stations.  It is about using your API to create tighter technical relationships with your partners.  This was beyond what you would see in the public developer world.  Similarly, at Netflix, it’s magnified here.  We have a bunch of public developers who work on a bunch of apps, but it pales in comparison to the impact that the Netflix API has had toward our device proliferation strategy.  We currently have about 800 different device types.  The API is seeing about 2 billion transactions per day through our streaming applications.  The public API is doing less than 0.1 percent of the total traffic.  That is negligible compared to the impact of our internal API strategy.  So, it depends on the company and how that company can leverage the API to support their business.


Bole: What is the API business strategy like at other companies?  How do APIs play out in terms of engagement by the business development crew and the engineering teams?   

Jacobson: Here’s how I characterize it: some people like to view an API strategy as ‘what is our business strategy for the API?’  I think that’s the wrong approach for that type of model.  I think what we’re really saying is, ‘what’s the business strategy, and does an API help us satisfy it?’ The business strategy at Netflix, for example, is that we want to be ubiquitous across all devices: wherever the user is, whatever they have, we want to make sure they can get it.  How can we leverage the API in a highly effective and efficient way, using economies of scale, to do that?

API is a technical implementation that helps you achieve that.  I think that’s, for the internal use case, the way to think about it.  And we think about this in terms of our metrics.  As an example, I think a lot of people think the metrics that you should care about are how many requests the API is getting, or how many apps are driven off the API. I think those are the wrong metrics for an internal use case.  The right metrics are: how does the traffic coming in from the API mesh with the rest of the traffic that you care about, and with the logging that you care about for your system?  Is it representative?  Are there things that we can do better within the API to better support the user experiences?  Or whatever the engagement is for your business.  Don’t think of the API metrics as API metrics; think about them as part of the ecosystem that supports your business strategy.


Bole: How aware is the business development team of API technology and how do you work with them?

Jacobson: They are extremely aware of the API as a way to facilitate partner engagement.  Plus, most of the product managers and business development staff are very aware of the role that the API plays in our product development.  They think of the API as one of many components of our entire product, but are aware that it is the distribution engine that gets our metadata onto devices in our customers’ homes.


Bole: What is your role in working with the business teams?

Jacobson: The API is in a unique position within the overall product stack, so I think of it as an hourglass.  At the top of the hourglass: all the UIs, all the devices, and all of the product people who think about what we need to deliver as an experience to customers.  At the bottom of the hourglass: many of the back end services (like the movie metadata database, the subscriber information, the ratings algorithm, the recommendations, and the search service).  The API is in the skinny part in the middle of the hourglass.

My role is to make sure that we can broker the data (between the top and the bottom of the hourglass) in a highly efficient, resilient and scalable way.  So we’re involved in quite a bit of the product development for virtually all of our UIs.  We have to work with the back end services to make sure that the data needed is exposed, or if it’s not currently exposed, get teams to interface with other teams to build a patch to get it delivered.  So my team is kind of a broker in terms of data and in terms of product development; we make sure everything moves.  And because the business development teams, partner engagement teams, and product managers drive many of the goals for these UIs, the API teams’ involvement with them is quite high.
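
As an editor's aside: the "skinny middle" of the hourglass can be sketched as a broker that fans one UI request out to several back-end services and composes a single response. The service calls below are hypothetical stand-ins, not Netflix's actual systems:

```javascript
// Broker a title-page request: fan out to back-end services in parallel
// (metadata, ratings), then compose one response for the calling UI.
// The services object is an illustrative stand-in for real back ends.
async function brokerTitlePage(titleId, services) {
  const [metadata, ratings] = await Promise.all([
    services.metadata(titleId),
    services.ratings(titleId),
  ]);
  return { id: titleId, ...metadata, ...ratings };
}
```

The UI never talks to the bottom of the hourglass directly; the broker decides what to fetch and how to merge it.
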


[Video: What is an API? (Volume II), Source: apigee on YouTube]


Bole: BBG is a media news organization, so it’s about moving product or content to people’s devices.  In your opinion, what is the role of APIs in news organizations now and in the future?

Jacobson: When I was at NPR, they thought of themselves as a journalism organization with a digital media arm.  At Netflix, we think of ourselves as a technology company.  We staff ourselves accordingly; more than half of our staff are part of product development.  We’re certainly in the media space, because the core of what we deliver to people is media, but we’re doing it with a very technology-oriented mindset.  That’s an important distinction because it makes the technology part of the DNA of what we do, instead of just delivering videos to people.  At NPR, for example, the staffing is balanced in favor of non-technologists.  That presents some challenges in terms of capabilities.  When I was at NPR and helped to develop the API, I knew that it needed to be highly efficient for what we were doing because we didn’t have the resources to do a lot of experimentation.

Media organizations need to really think about digital strategy.  What is the digital strategy and what is the role that it plays within the organization overall?  Staff accordingly and get the right skill sets and the right number of people in the right positions.  Focus on what the products are that you’re delivering and the API is going to come from whatever that strategy is.  If you think it will be a crowdsourcing model, then you will want to focus more on public API developments and doing things that will promote external development of the APIs.  If it’s an internal strategy, then you need great technologists with the right skill sets who can build and leverage APIs.  If your strategy is all about mobile, then you need people who can work on those products and consume from those APIs.  The API is a service that will help you develop your digital products.


Bole: What are some of the companies that are ‘doing it right’?  How can one execute against using an API to advance a news or media strategy?

Jacobson: NPR is doing it right.  Things have really evolved there in the past few years. While I worked there, we started thinking ‘let’s develop this API and then let’s see what public developers will do with it’.  Then Bradley Flubacher built the NPR Addict iPhone app off of our public API, and that got us thinking that we should build our own iPhone app, which ultimately sat on top of our internal APIs.  We realized that we needed to really leverage this great tool so we could work with our member stations and reach the growing number of distribution channels.  NPR is now staffed to support that entire ecosystem.

I don’t know enough about the details at some of the other media companies, but I think USA Today launched a public strategy about a year ago and ESPN launched one about 8 months ago.  What they have done is try to use APIs publicly and internally and launch both strategies around the same time.  When I talk to them about their internal strategies, I think that they’re moving in the right direction.  I would possibly question whether launching a public API is worth it at that point, based on my experiences, and I think I’ve told them as much.  I think that’s part of the evolution: as you’re starting to adopt a strategy, you have to start thinking about the things that are opportunistic for your strategy; over time you’ll learn, and then adjust.  One other key point is that the strategy should be an evolution; it’s not like, okay, we’re doing this for the next five years.  You’ll learn pretty quickly–if you’re NPR, or Netflix, or maybe even ESPN or whoever else–whether the public API is going to be worth the effort or not.  You should make adjustments based on that and change your strategy.  And as your strategy changes, that should impact your staffing as well.


Bole: What’s the future of APIs?  Where should BBG aim for in the future?  What are the trends that are most important?

Jacobson: The best bet is to think about internal API and to find a discrete use case or two and then have it developed.  See how it evolves and grows.  I think there’s a strong tendency for people to start with a public API because of the iceberg theory talked about in my book.  The tip of the iceberg is above the water and highly visible, representing the public APIs.  But the overwhelming mass of the iceberg is not visible below the water, which represents the internal APIs.  So people tend to get lured in with the notion of public APIs because they see other companies releasing and evangelizing them.  What they don’t realize is that many of these companies are really leveraging APIs mostly for their internal case.

So I would stay clear of public APIs at the outset for two reasons: 1) in many cases, the value proposition is not as great as the value proposition of the internal case; 2) it’s a lot more expensive and harder to get going than an internal case because you have a lot of external considerations.  These can include legal concerns, securing rights to content (making sure you can offer content if you’re getting it from other sources), what your business wants you to deliver, making sure you aren’t going to cannibalize your other business strategies, and how you are going to monetize the distribution of the content for public use.  Also, once the public API is out there and used by other people, you can’t easily take it away without upsetting them, so there is a public relations risk.  I think it’s riskier and more challenging to go forward with a public strategy, especially if you don’t have a clear value proposition.  If public APIs are part of the strategy, then launching first with internal APIs can give you confidence in the development and change process of running an API without the initial risk associated with a large set of external developers depending on it.


Deibert: How would the development of an API affect USIM in the short term and the long term?

Jacobson: It’s hard to differentiate international and U.S. media: with digital, the barriers of proximity are broken down.  At NPR, for example, the member stations were how people consumed information in the past, because the radio towers were near where they lived.  Now digital is coming into play, so you can get your home stream while you’re (on the opposite coast or overseas).  I’m sure there are people outside the States who are reading and subscribing to the New York Times, but that’s not something they could have easily done 15 years ago.  In the future, the majority of consumption will be closer to where the company is, but I think people should be thinking more about an international strategy if the content has international appeal.  APIs offer that.  For example, as Netflix broadens and goes to more countries (the UK, Latin America, and the Nordic region), APIs play a key role.  That’s how we get into people’s homes; location is not the factor, it’s just about having the right pipeline to get there, assuming that is part of the overall business strategy.


For more info:


- – - – -

(Thank you to Daniel Jacobson and Rob Bole for their contributions to this post.)

(The foregoing commentary does not constitute endorsement by the US Government, the Broadcasting Board of Governors, VOA, MBN, OCB, RFA, or RFE/RL of the information products or services discussed.)

Weekly Web Analytics Check-Up (for busy managers) Fri, 24 Aug 2012 10:12:31 +0000 Rebecca Shakespeare

This is a follow-up to a post on a 5-minute web analytics snapshot.

Even as a busy manager, you should be able to find 2-3 minutes a week (or every day) to check in on your website traffic, your top stories, and to identify whether there’s a problem. This checkup will give you tactical information that you can use today to inform your editorial and marketing decisions.  It won’t take long unless you discover a problem – and then you’ll be able to act quickly to resolve it.

This blog post will guide you through three questions you can answer using web analytics:

  1. How’s my website doing?
  2. What are the top stories on my website?
  3. Has anything big changed on my website? If so, what actions can I take?

1. How’s my website doing?

This is a repeat of what you did when you made your baseline worksheet. Check daily traffic patterns and weekly visits against your baseline to see if anything is unexpected. (In Google Analytics, Audience > Overview)

Check: Does your daily traffic follow the pattern you’re expecting? Change your view to weekly to see if your weekly visits are within your expected range. Remember to ignore the first and last weeks displayed in your analytics tool – they don’t always include a whole week of traffic.

Is everything as expected? Great, move on to step 2.
Do you see an unexpected change – high or low weekly traffic, or an odd emerging pattern? Ask yourself if anything else is going on.


  • If you see low traffic, can you explain it? Maybe your web editor is on vacation, your target audience had a holiday, or it’s just a dismally slow news week. If you can’t explain an abnormally low number, ask your web team or your resident analytics expert – something might be wrong.
  • Did you break a big story that resulted in high traffic? Send a quick note congratulating your staff, or let your manager know!
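The step-1 decision above can be sketched in a few lines of Python. This is an illustrative helper, not a Google Analytics feature – the function name and the baseline numbers are assumptions for the example:

```python
# Hypothetical helper: compare one week of visits against the range you
# recorded on your baseline worksheet.

def check_weekly_visits(visits, baseline_low, baseline_high):
    """Return a short status for one week of visits vs. your baseline range."""
    if visits < baseline_low:
        return "low"     # investigate: vacation, holiday, slow news week, or broken
    if visits > baseline_high:
        return "high"    # celebrate: did you break a big story?
    return "normal"      # move on to step 2

# Example: a made-up baseline of 40,000-60,000 weekly visits
print(check_weekly_visits(35_000, 40_000, 60_000))  # prints "low"
```

The point of keeping the baseline as an explicit range is that "normal" weeks take seconds to dismiss, and only the outliers cost you any follow-up time.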

2. What are the top stories on my website?

Page and Page Title views in Google Analytics

Check: Change to viewing just a day of traffic – yesterday is a good place to start.  Go into the All Pages report (from the left nav, Content > Site Content > All Pages).  In Google Analytics, you can change the display from showing URLs to showing headlines by switching the primary dimension from Page to Page Title (see the picture, right).

Look at the top stories for the day (ignore your home page and any pages that don’t have a headline). If you use advanced segments, you can look at just what a specific part of your audience is reading. Remember – a site typically has only one to four genuine top stories a day. Look at the page view counts to see which stories really stood out. If all of your stories have roughly the same number of page views, none of them is the top story.

Analyze: After a few check-ins, you’ll get an idea of whether your audience is interested in particular topics, only comes for breaking news stories, or really loves your video page. Your goal here is to see trends in interest.


  • If one story had more traffic than you expect a story to get, send an email to your web team and congratulate the author.
  • Keep yesterday’s top stories in mind when you go into editorial meetings. Once you’re looking at them regularly, you’ll have a sense of which stories, and which types of content, play well online.
  • If you know a lot of effort or good journalism went into a web story and you don’t see it at the top, check in with your web editor to see if they can highlight it or promote it on social media.
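The "does anything really stand out?" test from step 2 can be sketched the same way. The function, the 1.5× cutoff, and the page titles below are illustrative assumptions – the post only says a top story should clearly beat the pack, and that identical counts mean no top story:

```python
import statistics

def top_stories(pageviews, max_top=4, ratio=1.5):
    """pageviews: dict of page title -> views (home page already excluded).
    A story only counts as a 'top story' if it clearly stands out, here
    meaning its views are at least `ratio` times the median story's views."""
    if not pageviews:
        return []
    median = statistics.median(pageviews.values())
    ranked = sorted(pageviews.items(), key=lambda kv: kv[1], reverse=True)
    return [title for title, views in ranked[:max_top] if views >= ratio * median]

# One story clearly dominates: it is the day's top story.
print(top_stories({"Election results": 1000, "Weather": 200,
                   "Sports recap": 180, "Op-ed": 150}))  # ['Election results']

# Everything is flat: no story stands out, so there is no top story.
print(top_stories({"Story A": 100, "Story B": 100}))  # []
```

The exact cutoff matters less than having one: it forces the same "stood out or didn’t" judgment every day, so your editorial-meeting notes stay comparable week to week.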

3. Has anything big changed?

Check: Change your timeframe to look at the last 7 days. Look at the percentages in the Traffic Sources overview and the Location report. (In Google Analytics, Traffic Sources > Overview and Audience > Demographics > Location) Do they match the percentages you’re expecting on your baseline?

Analyze and Act – Location Change: Did the location of your audience change? If your traffic shifted dramatically from one geographic region to another, your audience composition may have changed. Ask yourself whether that’s an opportunity (a new target audience, maybe?) or a problem (your target audience isn’t coming to your website).  Use this knowledge to guide marketing and content decisions.

Analyze and Act – Traffic Source Change: If your traffic sources changed dramatically to favor a source you’re not expecting, look at what your top referrer is and compare it to last week to see if anything major has changed.

  • If Reddit, Balatarin, Huffington Post or another huge website linked to you, you can expect a short-term surge in traffic – it’s unreliable traffic, but you can be happy for the increase!
  • If you see a huge increase in traffic from Facebook, Twitter, or another social media site, let your social media manager know that you noticed.
  • If you see an unexpected referrer that might make a good digital affiliate, pass it on to your web editor or division, or reach out to them yourself.
  • If you see a huge increase in your search traffic, your search engine optimization might be improving; a huge drop might mean something is broken.
  • If you see your direct traffic fall, you should check on your newsletter or on-air announcements about your website – these are some of the ways you get direct traffic.
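Finally, the step-3 comparison of traffic-source percentages against your baseline can be sketched as a simple diff. The 10-point threshold and the source names are assumptions for illustration – pick a threshold that matches how stable your own baseline is:

```python
def source_shifts(current, baseline, threshold=10.0):
    """current/baseline: dicts of traffic source -> percent of visits.
    Return the sources whose share moved by more than `threshold`
    percentage points, with the signed change for each."""
    shifts = {}
    for source in set(current) | set(baseline):
        delta = current.get(source, 0.0) - baseline.get(source, 0.0)
        if abs(delta) > threshold:
            shifts[source] = round(delta, 1)
    return shifts

# Example: search fell and referrals spiked vs. a made-up baseline –
# time to check the top referrer and your SEO.
print(source_shifts(
    {"search": 30.0, "referral": 45.0, "direct": 25.0},
    {"search": 50.0, "referral": 25.0, "direct": 25.0},
))  # {'search': -20.0, 'referral': 20.0} (dict order may vary)
```

Anything this check flags maps directly onto the bullets above: a referral spike means look up the referrer, a search drop means check for breakage, a direct-traffic drop means check your newsletter and on-air announcements.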