Using NING in evaluations

We’re all familiar with this problem in evaluation: whether you’re using 1st, 2nd, 3rd, 4th or even 5th generation evaluation methodology, there are always cases where an organisation works in different continents, countries, or even regions within a large country, and you can never physically visit all the projects – sometimes the evaluation would become more costly than the project itself….

I’m using NING (or, nowadays, trying ELGG, as on our own evaluation-5 platform) with different methodologies: Most Significant Change (for publishing the stories online and discussing them in a forum), but also to invite comments on a case study, and even for stakeholders’ meetings in Fourth Generation evaluation.

You sometimes need to be quite dogged in chasing people to contribute (and that’s an understatement), but compare it to all those hours in planes, trains and cars, waiting in lines, waiting for latecomers, etc. etc….

In the end it is almost always rewarding.

On NING you can open your own little virtual corner of the web for free (although you have to accept Google ads – the business model of many of these services). It speaks for itself. ELGG is a bit more difficult, but that’s why I’m still experimenting with it.

NING even offers internal groups, chat, posting documents, open or closed membership, you name it. Even working as in a wiki (collaborating inside a single document) is now possible with an add-on.

The first time I organised a group chat, though, it was no success. It was in a ‘Most Significant Change’ process: we had gathered stories from 50 people throughout the programme, and they had all been published on the NING forum page (we even received some comments…). The next stage in MSC is to discuss the stories in a subgroup (one country’s stories, for example). The final stage is a group at the level of the organisation itself, discussing all the outcomes of the different subgroups, always guided by the question ‘what are the most significant stories about this project?’

The idea was to hold the country discussions through a (written) chat session. The chat was supposed to take place on a Friday afternoon, and I opened my NING in time, announcing by email that I was online and we could start. At least, that’s what I thought! Apart from my colleague evaluator in South Africa, simply no one came online!

So much for the chat option!

Anyhow, we soon found out that chats via Skype work a lot better (don’t ask me why!), and after we put a real prize at stake (in this case people could win the chance to support a project of their choice in the 1% Club…) we got people voting, and they even came online to hear who had won the prize….

It’s nice to use the new generation of social networking tools, but you need to accept that everyone uses them in their own time… Group chats are not in fashion anymore these days, unless you add some extras…

Next time: ELGG, free as well (open source software) and no ads….

Logic or partners?

Apart from conducting evaluations, I also advise (international cooperation) organisations on the development of their monitoring and evaluation systems – or rather: their quality systems.

In that work I encountered an interesting phenomenon: more and more I work with Southern organisations that have (to a large extent) mastered the logical framework – not least, of course, because that is by and large the mainstream approach preached by the (Western) donor community. Their mission and vision are clearly formulated; their overall goal, intermediate objectives, expected outcomes, matching activities and indicators are all there.

Still, they have problems with monitoring: let’s say daily feedback (on what they have done and achieved) and reflection on what the next step should be. That made me wonder how logical framework thinking leads to building thoroughly developed projects and programmes, but at the same time offers no ‘handlebars’ for daily practice.

Of course, I have used the logical framework approach for years (I even taught it to a lot of newcomers in development cooperation!). I know how to build problem trees, solution trees and the framework itself, and even how to do so in a participatory way, as the German GTZ once developed it. I like the fact that the approach forces you to think about the links between your overall mission and how you want to reach it, which steps are necessary, and so on.

In this case (a month ago, while I was in West Africa) I saw that the organisation was really using it: their planning was firmly based on three ‘specific objectives’, and they wrote reports based on achieved results. Yet they seemed unable to link all this to day-to-day practice, to take simple management decisions, or to organise the less simple regular reflection on their achievements and the ‘next steps to be taken’. Desk officers were working hard executing projects and building new projects in line with the ‘holy trinity’ of objectives, but it did not lead to their satisfaction or to synergy in their organisation.

It took me quite a while to ‘see the light’ after all the discussions I had with various people there… But I think I found some leads in the end. I saw that the systematic logical framework approach forces people to think about the obvious programme logic, but far too little about the programme’s partners, and not about the programme’s outcome either (the results as obtained by and ‘in’ the partners)… I started looking back at the various evaluation approaches I had used before, and then it dawned on me that the Outcome Mapping approach in particular was what this organisation needed as a counterweight to their (all too?) logical thinking!

Outcome mapping was developed by the Canadian organisation IDRC to resolve the dispute about measuring impact and, moreover, attributing impact to a certain programme. We all know that is difficult – often only possible after quite a few years, and the longer you wait, the less you can attribute to a particular intervention…. So it becomes pretty hard to establish exactly what evaluations are supposed to establish!

Anyhow, the Outcome Mapping approach deals effectively with this issue by acknowledging this fact and directing us to (boundary) partners and to outcome (defined as the change in behaviour of these partners, who are expected to contribute to the overall goal in the end). That is a radical step, and first and foremost it was – in my case – a relief for my partner, the West African organisation, because they could now focus on both (boundary) partners and outcome, instead of trying to establish that they were reaching (too) abstract or faraway goals and impact.

We are now in the middle of a process, based on the outcome mapping theory, to develop a list of partners through which the different programmes want to reach their goals, and to define a list of outcome challenges, as well as ‘graduated progress markers’.

Of course, the outcome mapping system is closely ‘knit’ to the logical framework approach and is not in itself a paradigm change (as Fourth Generation Evaluation and Most Significant Change are), but I can tell you: it works well in practice with organisations that are used to the Logical Framework Approach, and it is a lot more practical, yielding results in PME: Planning and Monitoring (and at a later stage I hope to let you know more about Evaluation as well!).

Bob van der Winden

Total make-over

It’s not easy to evaluate an organisation that is in the middle of a ‘total make-over’.

Still, thanks to the intense cooperation of many of the stakeholders involved, it became a very worthwhile operation – in the first place thanks to the transparent and co-creative atmosphere in which the evaluation was conducted.
The South African women’s fund in question was created in 1998 to ‘support local women, finding local solutions for local problems’ by providing grants to grassroots women, both individuals and groups, enabling them to come together and create their own economic and social justice, and especially to reduce the gender-based violence that is entrenched in these communities.
At the beginning of 2009, after the dismissal of the then director, the board realised they were not on the ‘right track’ and decided to bring in a new director, asking her to reform the organisation so that it would be ‘state of the art’ again.
The evaluation took place between October 2009 and February 2010, and its co-creative form suited the organisation well: after some joint training, the grants staff conducted interviews (in the Most Significant Change style), which I later scrutinised as well, and these interviews were discussed with the whole staff and the board. The rest of the evaluation consisted mainly of topic interviews and stakeholder meetings, following the ‘4th generation evaluation’ methodology. I acted more as a facilitator in the process, but the final responsibility (e.g. for the conclusions and recommendations) nevertheless remains mine. The evaluation, which I carried out, was immediately followed by a strategic planning process facilitated by Rosien Herweijer.
The main research question was: “How and to what extent has the fund reached its objectives in the past concerning the 4 areas of intervention: basic grants, seed funding to CBOs, women in leadership, discretionary grants, awards?”
The evaluation used five different methods (triangulation): a desk study, topic interviews with stakeholders, gathering stories from grantees, participative observation and a total of four workshops with different stakeholders. All this took four working weeks (apart from the work done by the staff themselves) and flowed into the strategic planning.
The so-called Claims, Concerns and Issues (CC&I) were discussed in the final stakeholders’ meeting. The Claims, Concerns and Issues are the ‘informing principles’ throughout the whole evaluation. In them, the evaluator initially records the good things about the organisation (the claims), but something only counts as a claim if there is consensus among the stakeholders (the meetings included representatives of staff, board and grantees, as well as sister organisations). Concerns are the things everybody agrees need to be solved; issues are all the other findings from the evaluation about which there is no consensus. The CC&I were validated in meetings with the board, staff and other stakeholders of the organisation, and again later in the strategic planning that followed the evaluation. The conclusions and recommendations in the final report are then the evaluator’s interpretation of those CC&I.
At the moment of writing this report, the organisation is halfway through a strategic planning process based on the findings and recommendations of this evaluation. The practical model used is a business plan, sorting out all customer segments and the value propositions made to them, rather than a theoretical model or an overly rigid one (like the logframe) that puts operations, rather than stakeholder groups, in a central place. The basic business model has been built; the challenge now is converting it into a strategic plan (on paper). Future annual plans will fit into this strategic plan.
In this way it became possible to adapt different assets from our toolbox to the needs of the organisation, and so far you can say it was an experience the staff will be able to build on later, not only in the form of the strategic plan, but also for future monitoring, etc.