Building Stronger Partnerships as a Data Scientist
Questions to ask and other tips
I’m a senior data scientist on Square’s Seller Experience team and I work on our online Help Experience. Our team provides relevant and targeted support to sellers at Square, allowing them to spend less time seeking help and more time running their business. I’m the only data scientist embedded on our product team and am the domain expert on all things data-related for Help. I’ve been working at Square for 2.5 years and truly enjoy working here.
If you’re a data scientist, you know how frustrating it can be when you’re handed a predefined list of metrics to look at. Our skillset isn’t confined to pulling data and crunching numbers; we can be powerful thought partners who bring a different, data-enriched perspective. We can ensure that our teams are designing products and analyses with the right business questions in mind. Though our functional partners may know this, they might not always know how or when to engage data scientists in product discussions.
Square has an embedded data science model, which means our data scientists are fully integrated within product teams while still reporting to org-specific data science (DS) leads. One of the major benefits I’ve seen from this operating model is that it fosters strong collaboration with the rest of our product team, including product managers (PMs), designers, and engineers. Being plugged into all our team’s product work, not just weekly syncs, has helped me build strong relationships, but that hasn’t eliminated the need for me to ask pointed questions and push back on data requests where I don’t think we’re looking at the right things. In my time working as a data scientist, I’ve found myself settling on a series of questions to help structure conversations with our partners.
Although you may not be operating in a company with an embedded DS org structure, these questions can still benefit any team and company. They can transform a meeting where a stakeholder hands off a to-do list into a conversation, and they can provide structure when your stakeholder doesn’t necessarily know what to look at. These steps can help strengthen your partner relationships and turn you into a more trusted collaborator. I’ll use a series of examples around exploratory analysis and A/B testing, but these questions could easily be applied to any kind of success metrics your business unit wants to define.
The first exploratory analysis I worked on with our PM was around search (bar) usage on our Support Center. This analysis was a multi-month project that had a large impact on our product and engineering roadmap. When our PM and I had the initial brainstorming session for this analysis, we asked questions like: How many searches are being done? Are sellers using the search functionality? These metrics were interesting, but they didn’t answer the main question we needed to tell the story and create solutions. The main business question we landed on was a derivative of the primary job to be done by our search tool: connect sellers to the right help content. We translated this into the question: are sellers finding what they need when they search for something on our Support Center?
This new lens completely altered the analysis design. I came up with requirements around search usage (how many sellers are typing in the search bar) and search effectiveness (what search index of results are they clicking on, what percentage of sellers actually click on search results). Metrics around how many searches were being done didn’t directly answer the main business question, but they were still included in the analysis because it was important for us to quantify the impact of our work. The findings from the analysis were crucial in determining the scope of the rest of the product/engineering work.
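To make the distinction concrete, here’s a minimal sketch of how search effectiveness metrics like these might be computed from a click log. The DataFrame and column names are illustrative assumptions, not Square’s actual schema:

```python
import pandas as pd

# Hypothetical search-event log: one row per search, with the position of
# the result the seller clicked (None means no result was clicked).
events = pd.DataFrame({
    "search_id": [1, 2, 3, 4, 5],
    "clicked_result_index": [0, None, 2, 0, None],
})

# Effectiveness: what share of searches led to a result click at all?
click_through_rate = events["clicked_result_index"].notna().mean()

# Of the searches with a click, which position in the results was chosen?
# A low median suggests the top results are usually the right ones.
median_click_index = events["clicked_result_index"].median()

print(f"CTR: {click_through_rate:.0%}, median click index: {median_click_index}")
```

Raw search volume would be a separate count (useful for sizing impact), while these two numbers speak directly to the “are sellers finding what they need?” question.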
This first project with our PM taught me an important lesson, which is to always make sure our partners and I define the main business question for any large project to properly orient ourselves upfront. After we define it, the metrics we choose to look at for the analysis directly answer that main business question. This might seem obvious, but how we shape the analysis completely depends on what we are looking to answer.
The people I work with are enthusiastic, smart, curious people who want to know all sorts of data points. Brainstorming sessions with our PM sometimes snowball into lengthy lists of questions and metrics. Given the amount of prioritization that we do for our quarterly and annual planning, I’ve also found it crucial to be disciplined about the scope of an analysis so that everything we look at drives the most impact. Some of the metrics proposed during the brainstorming phase are just “nice to have.” I like paring down a list of metrics by asking: what would we do with this data? This question has multiple benefits; it helps our stakeholders think more critically about their ideas, and it helps me understand things from their point of view. If a data point really is important and actionable, I’ll keep it in the analysis scope. Sometimes we’ll modify a metric so that it is more actionable. For example, for one of our support article KPIs we are likely to focus on page views with a 0% page scroll rather than raw page views. The metric is meant to be diagnostic: seeing that a support article has a lot of traffic isn’t necessarily a good or bad thing, but a high percentage of page views that never scroll signals the page could be improved (especially above the fold).
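A quick sketch of that diagnostic in code, assuming a hypothetical page-view log that records each view’s maximum scroll depth (the article names and schema are made up for illustration):

```python
import pandas as pd

# Hypothetical page-view log: one row per view of a support article,
# with the deepest scroll position reached (as a percent of the page).
views = pd.DataFrame({
    "article": ["refunds", "refunds", "refunds", "deposits", "deposits"],
    "max_scroll_pct": [0, 0, 80, 100, 45],
})

# Diagnostic metric: per article, what share of views never scrolled at all?
# Raw traffic is ambiguous; a high no-scroll rate points at a fixable page.
no_scroll_rate = (
    views.assign(no_scroll=views["max_scroll_pct"] == 0)
         .groupby("article")["no_scroll"]
         .mean()
)
print(no_scroll_rate)
```

An article with heavy traffic and a low no-scroll rate is probably doing its job; heavy traffic with a high no-scroll rate is a candidate for rewriting what sits above the fold.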
I ran an analysis on our Support Center, and one of the questions that came up from our designer and PM was where sellers were coming from (referrer URLs) for different page types. I wasn’t quite sure how this fit into the overall analysis and asked what we would do with the data. Knowing the referrer URLs was important for them to design the best user flow for sellers moving from page to page, so I kept it in scope. Our partners are all used to me asking this question now and will often voluntarily deprioritize metrics that they know aren’t going to be the most impactful for future projects. This process is still helpful, though, especially when I work with new teams, which may be overenthusiastic in identifying metrics or may not know which metrics to look at.
Planning exploratory analysis before large projects can help orient the direction that Product chooses to take. In the first example I mentioned about the analysis on search, this is exactly what our PM and I decided to do. He knew that our team was going to work on improving the search experience in future quarters but there were still many unknowns about what exactly to improve. The analysis was designed to specifically address a lot of the open questions our team had about how to approach the product work.
I am fully embedded on our product team and I actively participate in all of our project planning and retros, so deciding what I work on and when to do it is a very collaborative process. It’s taken practice to step back and take a broader, more strategic look at our team’s projects. Our PM and I inadvertently defined this new working model early on, and it’s been very rewarding to see the impact from my data science work. If you work in a company that doesn’t have this embedded model, you can still have these proactive conversations with your stakeholders and see what questions they need answered before important work streams start. I have weekly or bi-weekly syncs with our key stakeholders, and we will set aside some time at the end of the quarter (or in our case, a 6-week cycle) to assess what’s coming up and how we can use data to prepare.
Our PM and I have learned to collaborate better with each other as a result of these tips I’ve outlined. I’m also certain they have saved me time down the road. The additional effort of taking a look at upcoming work from a more zoomed-out, strategic point of view and getting consensus on project requirements has paid dividends. I haven’t needed to re-do an analysis due to changing requirements, and my finished work doesn’t get left by the wayside. I consistently ensure the project scope is agreed upon and the results are actionable. I usually don’t allow the addition of last-minute requirements from stakeholders once I’ve gotten everyone’s sign-off. (If you’re wondering how to handle this, I’ll usually propose that we table new ideas for a follow-up analysis.)
There are a number of reasons I've enjoyed working and learning with our product team: the wonderful people I work with, the strong partnership I have with them, and my accumulated career experience. I’m grateful that I’ve felt empowered by my manager and other leaders at Square to define a working style that works for all the individuals on our team. All of the tips and examples I’ve shared are from my own experiences here and this is by no means a definitive list. If you’re looking to grow in different areas as a data scientist, feel free to pick and choose tips from here and find your own style. It takes time to build relationships with your partners and it may take several iterations to find something that works.
We are hiring if you’d like to join us!