Routine outcomes measurement and the case for collaboration

Content type
Webinar
Event date

8 May 2024, 1:00 pm to 1:30 pm (AEST)

Presenters

Mel McSeveny, Courtney Jacques, Kat Goldsworthy

Location

Online

About this webinar

For providers in child and family services, it can sometimes be hard to find the ‘right’ tool to capture the information you need to report to external funders and internal stakeholders. Some publicly available measures can be expensive, too narrow in scope, or use language that isn’t relevant to service contexts and families. This can often mean service providers use tools that aren’t fit for purpose, don’t collect useful information or aren’t client and practitioner friendly.

This webinar shares an example of how one organisation, CatholicCare Sydney, addressed these issues and developed their own routine outcomes measurement tool (ROM) using research and practice expertise. 

Hear from CatholicCare’s Evaluation Manager, Mel McSeveny, and Practice Manager, Courtney Jacques, as they discuss the collaborative process they used to develop their ROM, how they approached implementation and what they’re learning from the information they’re collecting.

The webinar will also touch on how project staff drew from research and practice evidence in developing the tool and how logic models, culture and curiosity featured in the development process.

This webinar will give you: 

  • insight into what a ROM is, what it can tell you about the people attending your services and the reasons to develop one
  • an understanding of how to develop a ROM
  • insight into how a program logic model can help you develop a ROM 
  • an understanding of the benefits of collaboration to develop a ROM that can be integrated into practice.

This webinar will interest practitioners, managers and evaluators working in child and family services.

Audio transcript (edited)

KAT GOLDSWORTHY: Good afternoon everyone. My name is Kat Goldsworthy. I’m a research fellow here at the Australian Institute of Family Studies, working in the Evidence and Evaluation Support Team.  Before we begin today’s webinar, I would like to acknowledge the Wurundjeri, Woi Wurrung and the Bunurong people of the Kulin Nation, who are the traditional owners of the land in Melbourne where I am speaking to you from.  I also pay respect to the traditional custodians of country throughout Australia and recognise their continuing connection to land and waters.  We pay our respect to Aboriginal and Torres Strait Islander cultures and to Elders past and present.

Today’s webinar is part of a series designed to share information about the evaluative work that social service providers are undertaking across Australia.  The format is a little bit different to our regular program in that it will be a brief conversation between me and two other guests.  Today my guests are Melissa McSeveny and Courtney Jacques from CatholicCare Sydney, who are joining us to talk about developing and implementing a routine outcome measurement tool.  Before we dive into our discussion though, I do have a little bit of housekeeping to cover.  So, this webinar is going to be recorded and that recording will be available in about two weeks’ time.  If you want access to that recording, you can get it by subscribing to the AIFS news, so we’ll promote it through that avenue, or you can access it on the AIFS website under the webinar banner.

There’s also going to be a short feedback survey that we’ll open at the end of the webinar, and we’d really appreciate it if you would take some time to complete that survey, because as evaluators we really want some feedback about how we’re going and whether these webinars are hitting the mark for you, and if there are any areas that we can improve on, we’d love to hear from you.  But now we can begin.  Let’s get into the discussion.  Mel and Courtney, welcome.  It’s really wonderful to have you here.  Thank you so much for joining us.  Could you possibly just start by telling our audience who you are and what your roles are at CatholicCare Sydney?  Mel, I might throw to you first.

MEL MCSEVENY: Sure.  Thanks, Kat.  So, I’m Melissa McSeveny, as Kat mentioned, and I’m the manager of Evidence, Impact and Research at CatholicCare Sydney.

COURTNEY JACQUES: I am the Practice Manager for our Children’s Contact Service, which runs a funded contact service in inner Sydney on Gadigal land.

KAT GOLDSWORTHY: Wonderful.  Thank you both.  Now, I invited you both here to talk about some really interesting work that you’ve been doing, developing a routine outcomes measurement tool that’s now being used at CatholicCare, I think, across several different services.  I’m sure our audience is going to be really keen to hear how that works in practice.  Developing one tool that can be used across different areas is something that a lot of people struggle with, or are trying to achieve.  So, very excited to have you here to talk about that today.  We might start then with you telling us a little bit about what a routine outcome measurement tool is and what led you to develop something like this in the first place.  Mel, do you want to start that off?

MEL MCSEVENY: Yeah, sure.  So, routine outcome measurement is the practice of measuring the impact of services throughout the client journey.  We aim to measure how clients are going with particular circumstances in their life when they enter our service, to have a baseline measurement.  Then, because some of the family law services we’re talking about today can go for quite a few months, or even up to a year or more, we look to measure those outcomes again throughout the client journey and then again at exit, to measure the impact of the service for those clients.

This project that we’re talking about today, we had to develop our own outcome measurement tool because a lot of the tools that are recommended or being used across other family law services in Australia are not very specific to the outcomes that we’re looking to measure.  A lot of services might use measures of wellbeing or psychological distress.  But we really wanted a tool that would better reflect the outcomes we’re trying to achieve in these services, like parents and caregivers being able to self-regulate and emotionally regulate, ensuring that the needs and best interests of children are heard and prioritised, and being aware of other intervention and support services available to them.

KAT GOLDSWORTHY: What kinds of services are we talking about here that this tool’s going to be implemented in?

MEL MCSEVENY: Yeah, there’s a range of services related to family law, such as family law counselling, family dispute resolution and children’s contact services.

KAT GOLDSWORTHY: Yeah, so I imagine, even though there are probably some commonalities shared across those services, there are always going to be unique features and things that you have to get your heads around when you’re developing something that will meet the needs of these various services.  I’m going to ask you about how that collaborative work took place in just a moment.  First, I’m curious: you mentioned that there were a couple of tools that you had been using that weren’t quite hitting the mark, and that was what led to the development of this particular tool.  Were those earlier tools validated, and have you incorporated some of them into this more standardised measure that you’re using?  What does it look like, I guess, is really my question.

MEL MCSEVENY: Yeah, sure.  So, we started this process in early 2023 and up until then the services had been using the SCORE questionnaires that are required by DSS, looking at different circumstances in our clients’ lives.  Obviously, the SCORE questionnaire has been developed for a really wide range of services, so the questions weren’t particularly meaningful for our practitioners and clients and didn’t always support case planning or measuring the particular outcomes that we were looking to achieve.  So, we started with collaboratively designing program logics with each of the different services and then looking at the common outcomes between them, as well as the outcomes that were particular to each of them.

When we were designing those program logics, we were looking at contractual requirements and client need, as well as information from practitioners who are working in the service with clients and research evidence, all those different sources of information.  Then when we were looking for validated outcome measures, there weren’t any that were specific to those outcomes.  So, we actually looked at a tool that had been put together as a kind of conglomeration of a range of validated tools and had been tested and evaluated in another service in Victoria.  We read the evaluation report from that to look at feedback from the services and incorporated the parts that had worked, and then negotiated which items we’d keep, change or needed to add in from there, again, with the services.  So, we really collaboratively designed those routine outcome measurement tools, and the routine outcome measurement strategy as well, looking at when best to implement those tools throughout the client journey depending on the different services.

KAT GOLDSWORTHY: Courtney, I’m keen to hear your thoughts on this whole thing.  You’ve come at this from a practice background and perspective, I guess.  What has your involvement been in the development of this tool, and what are your general thoughts on it?  How’s it been?

COURTNEY JACQUES: Yeah, so the whole process of developing a routine outcome measurement was a really positive process for our practice managers and also for our team.  It was a meaningful process because, as Mel mentioned, it started with the development of program logics for each of our programs, which was a new way of thinking for many of our practitioners.  But it was something that generated a lot of energy for our practitioners, because in the thick of service delivery we’re not always pausing to think about what we’re achieving in the short term, medium term and long term, and how we’re showing that what we’re doing is effective.

So, to have the support of Mel and her team to take time to explore those questions and come up with a program logic really generated some energy, and then some excitement to work out ways that we can demonstrate this in a formal way by asking our clients meaningful questions at the start of service delivery, during service delivery and at the end, and really demonstrating what impact we might’ve been having.  So, it was a process of lots of conversation, lots of negotiation, lots of feeding back and forth and problem-solving.  It was a really collaborative process and really supported positive teamwork.

KAT GOLDSWORTHY: So, from your perspective, do you feel like there was a need to have a tool like this in place, that there were some systems or measures in place that weren’t working, weren’t capturing the data that you wanted, and maybe were a bit clunky to implement?  Were you hearing that on the ground, and from your perspective there was just this real need and desire to do something like this?  Or was there a driving force coming from a different avenue, and you were then brought on board to manage the process?  I’m just curious to hear what the driving forces were.

COURTNEY JACQUES: Yeah.  So, I think it’s no surprise to our practitioners that there’s been a conversation over the last several years around our funders being much more interested, not just in what we’re doing and our output, but in the actual impact that we’re having, and wanting services to be able to demonstrate that the funders are getting value for money really, and that we’re doing what we’re funded to do, that we’re doing what we’re saying that we’re trying to do.  So, there was this understanding that this was where programs needed to go, and then it was a really – so, the discussions were – sorry, I’ve lost my train of thought.

KAT GOLDSWORTHY: That’s okay.  I was really just wondering about – I mean, it sounds to me like those are the external reasons as to why you’re doing this, but then you’re also talking about going through this program logic process and that being a really energising process.  Maybe I shouldn’t be surprised by that, but I think it can be a way of coming together and representing all of this great work that people are doing.  Then obviously you’ve got this arm that Melissa comes from, which is making sure that it’s evidence-informed and you’re having that lens on it, as well as, I’m assuming, the experience of practitioners and having the reality of the program reflected in that program logic model.  So, I mean, was that an unexpected outcome of that process?

COURTNEY JACQUES: Yeah.  I mean, definitely.  If you’d asked me two or three years ago whether I would be saying that I was excited about routine outcome measurements, I don’t think I would’ve believed myself.  But I think our practitioners have always been curious and had ideas about the impact that we were having, and how clients were finding our service at the end of the process.  So, when we were invited to co-develop these routine outcome measurements, there was a real desire to be able to get some real data that we could then lean on to see how clients have experienced our service, and whether we’ve improved safety, for example, or whether parents are feeling more confident to use communication strategies to reduce conflict, that kind of thing.

KAT GOLDSWORTHY: Yeah, and I want to come back to some of the results and the data that you’ve been getting now that this tool has been out in the field for several months.  But I’ll come back to that a little bit later.  You’ve both mentioned that it’s been a very collaborative process, with lots of different people involved.  I’d like to hear a bit more about that process, I guess.  What were the steps that you had to walk through to achieve that end result of developing this tool?  You’ve had the program logic part in the middle, but Melissa, maybe you could just talk us through that process a little bit, if you wouldn’t mind.

MEL MCSEVENY: Yeah, sure.  So, we wanted to make sure that the expertise we engaged to inform the work came from all those different perspectives.  We didn’t want to just make it a desktop research project.  So, we collaborated from the beginning with practitioners, practice managers and the director of family law services to make sure that we could come up with an evaluation strategy with routine outcome measurement that wasn’t additional to the work already being done by practitioners.  We worked with the practitioners to develop the program logic, ensuring we had their input around what they thought were important outcomes for the program, and what outcomes were important to measure and could be measured.  We also wanted to make sure that the tool we used could help inform case planning and would be meaningful for clients as well.

So, I think the most in-depth and longest part of the process was around wordsmithing and including and excluding items.  We did take some items from validated tools when we were developing this tool.  But we really wanted to make sure that they were strengths based, that parents and carers and clients would not be put on the defensive when they were responding to the questions, and that it would help support a relationship of trust between the client and the practitioner.  We know that often clients in family law services, for many reasons, might not be sure who they can trust or what information to give.  But we wanted to build that relationship of trust with these tools as well.  We didn’t want to over-survey clients or increase the administrative burden for practitioners, so we were really careful around deciding how many items would be in the tool.  We didn’t want to add in questions that might be interesting but not meaningful or useful.

KAT GOLDSWORTHY: How long did it end up being, can I ask?

MEL MCSEVENY: Yeah, sure.  From the beginning of developing program logics with each of the teams, through to looking at a routine outcome measurement strategy, building the tool together with the practice managers, supporting the teams with capacity building to use the tool and then implement it, and of course working with our systems team to ensure our client management system could support those tools, the whole process took around nine months.

KAT GOLDSWORTHY: And the tool itself?  How many items did you include in the measure?

MEL MCSEVENY: So, the initial tool that we use at intake has around 13 questions.  They vary depending on the type of client and the service, but it’s around 13 questions for that initial tool.  Then throughout the client journey and at exit we’re also asking some additional questions around client experience and satisfaction, so there are an additional six questions around their experience.

KAT GOLDSWORTHY: And that length, through the process you were working through, was deemed acceptable practically?

MEL MCSEVENY: Yeah, that's right.  So, there was lots of weighing up of the pros and cons of different tool lengths.  We know that there are some quite short tools, like the PWY for example.  But the shorter the tool is, the less specific it is; and the longer the tool is, the more specific it is, but the greater the administrative burden, the more it impacts the relationship between the practitioner and the client, and the more it affects how many clients are happy to complete it, things like that.  We negotiated, I guess, between an evaluation perspective and our practice perspective, how long the tool should be and which items were most meaningful and should be kept in.

KAT GOLDSWORTHY: Sorry, I interrupted you before when you were talking through that process.  I think we got to the point of wordsmithing and tweaking the number of items, and then there were some other stages that happened after that, weren't there?

MEL MCSEVENY: After we’d decided on what the tool would look like, we then moved to capacity building, I guess, and cultural change for the teams to start to implement the new processes.

KAT GOLDSWORTHY: That's always the fun bit, isn’t it?  How was that experience for you, Courtney?  Was there a lot of change management that needed to happen, or was it easier because people had been involved from the start?  Maybe I'll just step back for a second.  Who within the service delivery teams was involved in the collaboration process?  Can I ask that, firstly?

COURTNEY JACQUES: Yeah.  So, the practice managers of each of the programs were involved in developing the tool.  We didn’t want to overburden practitioners with the work of doing the development, knowing that they already have a lot on their plates.  Throughout the process, my practice manager colleagues and I were keeping our teams informed on what was happening and where the process was up to, and building some positive expectation.  There was a lot of encouragement and reassurance that this was being developed carefully and with much thought, and that we were thinking about things like the impact on the therapeutic relationship, for example, wanting to help it, not make it worse.  We were also aware of any additional administrative burden on practitioners and really mindful, again, of trying not to add too much more to what they were already managing.

So, yeah, it did help that practitioners were involved right from the start, from the development of the program logics.  When it did come time to trial the implementation, it started with one or two practitioners from each team, just to start to try out the tool, so that if any glitches were picked up, they weren’t being experienced by the whole team all at once.  It did require some effort and work from practice managers to ensure that teams were on board and to respond to any anxieties or concerns.  But I think the process was overall quite seamless and quite positive, and it was really important for us to be explaining the why behind the what: how this would benefit clients, how this would benefit practitioners’ own practice and their own work with clients, as well as benefiting the program and the organisation through how we’d be able to use the data once we got it.

KAT GOLDSWORTHY: How – sorry.  Go, Mel.

MEL MCSEVENY: Sorry, I was just going to add to that.  Although I was involved in supporting the teams, the practice managers were real champions of the work and kept that cultural change going, having a really positive focus on the benefits of the tool and also keeping the practitioners very solutions focussed.  If there were any challenges, the practice managers were great at asking the staff to think about solutions so that their colleagues could learn from them as well, which I think was an important part of making the implementation successful.

KAT GOLDSWORTHY: Yeah, because it’s something that I'm always really interested in, and that I'm continuously learning about: how to create that buy-in at the level of the practitioners, who are ultimately responsible for the collection of data in most services.  It sounds like you were having a lot of conversations.  There was obviously a capacity-building element around the implementation of this tool and a lot of peer sharing and learning.  Was there anything else involved in this process of trying to create that buy-in?

MEL MCSEVENY: I think being able to show examples of data coming in from routine outcome measurement tools.  Before we had data from family law services, I could show what was happening in other services that had implemented tools, to show how the information and the data would look when it came in, and then talk about how that data would be useful.  That data is being translated into the SCORE framework to be reported through DEX, so it helps us to meet contractual requirements and it helps to make transparent the impact of the work that we’re doing.  We know that it will support a whole range of business sustainability really, around funding and contracts, staff development, clinical governance, risk, all of those things that the data can inform.

KAT GOLDSWORTHY: It sounds like you’re being supported by a positive culture around evaluation anyway.  There are multiple levels through which these things are being reinforced, and then you've got these feedback loops as well, which, I know just from hearing from others, can be incredibly powerful.  We’ve only got a few minutes left, so I really just want to hear a little bit about what you’re seeing now that the tool’s been implemented.  It sounds like there was at least a small process where the tool was tested by some of the practitioners in some of the service areas and maybe there were tweaks being made.  But what happened through that implementation process, and what kind of results are you seeing now?  I mean, I don’t know if you’ve had a chance to really analyse the data in any great detail, Melissa.  Is there anything that you’re learning from the information that the tool is now collecting and giving you?

MEL MCSEVENY: So, we are still looking at the best ways to analyse the data.  Unlike financial data, for example, where you can look at the raw data and it’s quite indicative of what’s happening, routine outcome measurement data is often like a weathervane.  It points you in a particular direction, but it often raises questions that need further investigation.  So, that's what it’s pointing us towards at the moment.  We’re having a look at the raw data, making sure the tools are being implemented the way that they’re meant to be and that the data’s coming in correctly, and then looking at questions we might want to ask next, which is obviously very useful when we’re looking at ongoing evaluation for the service.

KAT GOLDSWORTHY: Courtney, how’s it been from your perspective, the implementation side of things?  Have you learnt anything through that process?  How’s it going?

COURTNEY JACQUES: So, the feedback from practitioners so far is generally positive, and it seems like clients are engaging positively with the tool, agreeing to fill out the forms and finding it meaningful, which is helpful.  I mean, it’s an ongoing process with ongoing tweaking and things still to be done.  Even this week we’re getting feedback from some practitioners that certain clients, because of their circumstances, may not be finding some of the questions relevant to them, so they’re struggling to answer.  That's literally a conversation we’re having this week and next week, around what to do with that, working out how widespread that issue might be and what some possible solutions are.  But overall, it’s been a really useful and positive process, even though it was quite a bit of work to get it built and get it started.

KAT GOLDSWORTHY: That is not insignificant, is it?  You’re obviously trying to implement something that can be easily used and is going to give you meaningful data about your services, which is a really great initiative.  It sounds like there's potential for that to be refined as you go, because you've got this great evaluation unit at CatholicCare Sydney that you can work with and refine as you go.  There are now great partnerships that have been established.

MEL MCSEVENY: As an aside, it’s been a positive cultural change, I think, to share learnings between the practitioners and the evaluation team.  We’ve been able to share the importance of an outcomes focus, and the evaluation team has been able to learn a lot more about the details of the service, the outcomes being achieved and the relationships between the clients and the practitioners.

KAT GOLDSWORTHY: Yeah, I mean, that makes sense.  I was thinking, when you were talking before, Melissa, about the process, that kind of iterative process you were going through to design the tool, and you were talking about wordsmithing and coming up with the right number of items to include.  What does that look like?  Is it that, as an evaluator, you come in and go, right, these are the kinds of questions, we’ve pulled these things from these different tools, and these are the kinds of questions we think you should be asking, and then it’s a matter of the people working in these service areas tweaking them and going back and forth?  What does that look like?

MEL MCSEVENY: A series of meetings between three or four practice managers, the director and myself, talking through individual questions and thinking about what the possible reactions or responses might be from clients in responding to those questions.  Keeping in mind that although the adults are our clients, many of the outcomes that we’re looking to achieve are actually for the children.  So, how do we get indications from adults on a regular basis around what might be happening?  How do we gain insight into what might be happening for the children through the parents’ and carers’ eyes?  So, lots of gathering various perspectives on the wording of the questions and negotiating what would be most suitable to keep in.  That probably took eight weeks, I think, refining the wording of each of the questions.

KAT GOLDSWORTHY: Yeah, it’s an incredibly important thing to get right, isn't it?  In my own experience, it can be hard to please everyone on that front.  Particularly when, as you said before, you're trying to cover off on a lot of different things with potentially just a couple of questions.  That can be really challenging to do.

MEL MCSEVENY: Yeah, that's right, and making those decisions around what we need to routinely measure with these tools and what can be measured or investigated periodically with a more in-depth evaluation.

KAT GOLDSWORTHY: Off the top of your head, do you know what you decided?

MEL MCSEVENY: So, feedback from children, for example, gathering that insight from children and young people.  As I said, they’re not our direct clients in a lot of cases, but obviously it’s vital that we have information from them.  Although we can routinely gather insights from them into client experience or satisfaction, we will only periodically get evaluative information from children, and that will be separate to the routine outcome measurement process, because we know that routine outcome measurement tools are often not reliable with children and young people.

KAT GOLDSWORTHY: Yeah.  I guess the point is, it’s about monitoring what’s going on and having a separate evaluation process, so a different function and a different objective.  You often have to have multiple things going at once to be able to answer the various questions that you've got.  In the meantime, in between doing proper evaluations, it makes sense to have something like this in place where you can monitor and track the key things that you're trying to do across your services.

MEL MCSEVENY: Yeah, that's right.

KAT GOLDSWORTHY: Well, I think we’re going to have to leave it there.  I've probably already gone a couple of minutes over time, just because I could genuinely ask you questions about this all day, as you've probably realised.  Thank you both so much for joining us.  I really appreciate you sharing your experience and your insights.  I think this will be really relevant to a lot of the work that other people are doing.  No doubt you’ll have people contacting you, asking specific questions about how they could do something similar in their organisations.  So, thank you very much.

MEL MCSEVENY: Thank you.

COURTNEY JACQUES: You're welcome. Thank you.

KAT GOLDSWORTHY: Thanks, Courtney.  I’ll also just thank our audience.  Thank you to everyone who has joined us online.  We really appreciate you coming along and trying something a little bit different, a little bit new, that we’re offering.  A big thanks to our comms team, who do all the amazing work behind the scenes to produce these webinars and make them happen.  We’re incredibly grateful for that support.  So, please subscribe to the AIFS newsletter if you want access to the recording and, if you have time, complete the feedback survey when it pops up at the end of this webinar, because we absolutely value your feedback and are always wanting to learn about how we can do better.  Thank you.  I’ll leave it there.  We look forward to you joining us at our next webinar.  Take care.

Presenters

Mel McSeveny

With experience in the not-for-profit sector as well as local and state government, Melissa has led strategies to deliver infrastructure to support resilient communities, guided complex policy development projects related to the provision and prevention of homelessness services, and collaborated on early intervention programs for families and communities. In her current role at CatholicCare Sydney, Melissa is a facilitator between the theoretical world of research, evidence and best practice, and its translation into impact measurement, service design and improvement.

Courtney Jacques

Courtney Jacques is a social worker and manager of CatholicCare Sydney’s Children's Contact Service, which has an integrated, therapeutic case management model of practice. Courtney is dedicated to operating a contact service that is innovative, trauma-informed, and engages in cross-sector collaboration.

Facilitator

Kathryn Goldsworthy | Senior Research Officer, Evidence and Evaluation Support

Kat Goldsworthy works in the AIFS Evidence and Evaluation Support team which specialises in strengthening evaluation capability across child and family support services. Kat is knowledgeable and skilled in designing and preparing program evaluations, developing program theory and logic models, collecting and analysing qualitative data, communicating evaluation results, research synthesis, knowledge translation and group facilitation and training. She has worked in government and not-for-profit organisations for 15 years in roles related to employment, health and community services.

Kat is passionate about creating and sharing knowledge about programs and practices that can positively benefit Australian families.
