Before proceeding, let us take stock and review what we have learned from our journey thus far.

The preceding chapters underscore the imperative of listening to MSMEs to understand their social capital. On the premise that MSMEs themselves are the ones who knew, felt, and experienced the conditions under which they were operating, the approach we used to capture the type, nature, and intensity of their social capital was to rely entirely on their perceptions and responses to a set of questionnaires used in the survey. The questionnaires themselves were composed and selected based on what the MSMEs deemed important, broken down into objectives, constraints (challenges, problems), and alternatives (encapsulating the SC-compatible policies).

Using the AHP/ANP, the elements of objectives, constraints, and policies were then ranked according to what MSMEs perceived as most preferred for improving their productivity. This policy ranking was treated as the “messages” sent by agents in the MDT framework and evaluated by the monotonicity test to address the issue of implementability. As described in the last two chapters, the results show that MSMEs overwhelmingly favored having a network with other stakeholders. The preferred policy-mix also included having a network. After conducting the monotonicity test, these preferred policies were found to be implementable.

Fig. 4.1 Organization of this chapter

There are a number of issues relevant to the above findings that need to be clarified. First, because the primary inputs of the model used to derive those findings were not secondary data but the perceptions and judgments of MSMEs, one needs to define what constitutes perceptions and judgments. Second, what we did in the previous two chapters was to use the AHP/ANP-based ratio scales for conducting the MDT-based monotonicity test. To our knowledge, this study is the first to use such an approach; most studies used either simple or other standard scales such as those based on the Borda count. In this context, one may wish to compare the results of this approach with those of a simple ranking that ignores the intensity of perceptions. Another major issue concerns the gap between MSMEs’ preferences and those of social planners toward policies deemed appropriate to help improve the MSMEs’ productivity. How big was the gap, and were there any implementable policies or policy-mix that could align MSMEs’ and social planners’ preferences with the social choice function (SCF)? These are the issues addressed in this chapter.

Figure 4.1 depicts how the chapter is organized. Before comparing the results of the monotonicity test using simple Borda-based versus AHP/ANP-based ranking in Sect. 4.2, we begin in the next section with some important points regarding perceptions and judgments. The last section discusses the difference (gap) between the ranking of MSMEs’ preferences toward implementable policies or policy-mix and that of social planners. This issue is important as it could explain why some of the implemented policies have not been effective. In the discussion, we also address the question of what mechanism could be used to align the different preferences, and we present such a mechanism.

4.1 Perceptions and Judgments

It is clear that the whole process of “listening” before arriving at the preferred policies and policy-mix began with capturing MSMEs’ perceptions as reflected in their judgments when filling in the questionnaires. Indeed, from the early stage of developing the questionnaires to the late stage of obtaining the ranking of SC-compatible policies, we relied on MSMEs’ perceptions. These perceptions played a fundamental role in our attempt to understand MSMEs’ mental bandwidth.

Why are judgments and perceptions so important for preference ranking? We know that judgments and perceptions are fundamental in decision-making. Given the information that we have and the circumstances we encounter, we make decisions based on what we perceive to be the main issues and problems to be solved. Likewise, making a priority ranking is based on perceptions and judgments.


Survey story: A pre-survey interview was conducted in one of the wet markets in West Jawa to ensure that the issues to be raised and questions to be asked in the follow-up questionnaires reflected the real concerns of MSMEs, not what we thought was important. To capture the true feelings, perceptions, and judgments of our respondents, we tried to be fully engaged in conversation with them without talking much. We strongly believe that we say a great deal through the way we listen. To the extent that listening is a sign of respect, our respondents perceived respect, first and foremost, when we listened deeply, not when we simply waited for them to stop talking.

But what constitutes perceptions and judgments, how are the two related, and how do they affect people’s preferences? When we are about to make decisions or choices among alternatives, we encounter stimuli that cause us to feel an emotion. Through a set of processes, these stimuli produce sensory impressions that take the form of sensations, from which certain information is transmitted. The process consists of a sequence of steps that involves selecting, organizing, and interpreting that information. It is this process that forms our perception, which we then use to make sense of all the stimuli.


Survey story: Most of the coffee in Rejang Lebong, Bengkulu, is produced by smallholder farmers in rural areas far from the processing industry. This makes transport and logistics costs high, and the coffee is sold unprocessed. Over the last few years, their production has been significantly affected by climate change, for which they need information about climate-smart cultivation techniques and better skills to cope. While this is more so for the Robusta coffee farmers, a marketing network is particularly needed by the Arabica coffee farmers (confirming that a different ranking of problems implies a different ranking of preferred policies).

Together with thoughts and feelings, perception constitutes the mind. It is how we take in, and make sense of, information or a situation. The way we evaluate this information to form opinions or to react is called judgment. Both perception and judgment are mental processes: perception comes first, and judgment follows.

Judgments themselves can be influenced by experience, available information, and the desire to accomplish something. They also depend on our feelings and ability to interpret the information, implying that they can evolve over time depending on the learning process and changed circumstances. The comparison of survey results before and after COVID described in Chap. 3 is an example of testing the effect of changed circumstances on MSMEs’ perceptions and judgments, although after conducting the monotonicity test the preferred policy before and after COVID turned out to be the same.

The mechanism from “listening” to making the priority ranking was driven by the respondents’ thoughts and feelings, the creation of which involved both physical and non-physical phenomena. The physical part comes from the fact that thoughts and feelings arise from the electrical firings of neurons inside the brain. The brain itself is made up of energy and matter, and the firings of neurons are electrical vibrations, a physical-world phenomenon. Hence, this part must obey the laws of chemistry and physics (Saaty, 2010). Communication between neurons and the nervous system is facilitated by neurotransmitters (such as serotonin and dopamine), chemical messengers that regulate behavior and judgment through various psychological functions, including mood, emotion, stress response, and cognition (Footnote 1). These functions obviously affected the way respondents answered the questionnaires in our survey.

While the brain and the firings of neurons are physical-world phenomena, the synthesis of the firing signals produces non-physical properties not found in the matter and energy of the brain. It is to this synthesis that perceptions (along with thoughts, feelings, and judgments) belong. The process of preference ranking is thus governed by both the physical laws of nature and the behavioral laws of psychology.

Hence, although respondents’ minds originate in the organic matter of the brain, their operations must follow not only the laws of the physical world but also the principles that regulate thoughts, feelings, and judgments. How the ranking is made depends on how respondents synthesize the signals emitted by the firings of neurons.

What if the problems we are working on contain several layers of sub-problems that are interdependent and too complex to be used for making the ranking (e.g., involving too many trade-offs)? Also, what if the components in the problems that require preference ranking are viewed differently by different respondents, each of whom answers the questionnaires with a different degree of consistency?

First is the issue of complexity arising from multiple sub-problems. Making a priority ranking in a complex problem must consider all factors from different angles, each with its own resulting preferences. The influences of those factors also need to be considered before combining the different preferences to yield the ultimate ranking. As shown in Chap. 2, this was done by developing a structure in which all relevant factors and their interdependent influences were included. Such a structure helped us understand the complexity of the problem better before we measured the relative strengths of the factors and their influences. In cases where alternative policies could influence other factors, we used a network structure as in Sect. 2.2 of Chap. 2. Otherwise, we used a hierarchy structure as in Sect. 2.3 of Chap. 2.

The second question concerns group decisions. Reaching a consensus among members of a group is always a challenge. The “matching” between the particular issues or problems that we deal with and selected respondents who literally face those issues is critical in this context. In our case, either the owners or the operators of the MSMEs themselves, not proxies or third parties, were selected as direct respondents. It is our principle that they should be the ones who answered the questionnaires because they were the parties who knew, felt, and experienced the problems and issues surrounding their operations. This principle meets the prerequisite for finding a consensus in a group, which requires careful consideration by members who are knowledgeable and informed about the activities they do. A cooperative decision is easier to reach when people have mutual knowledge and understanding.

As to the issue of averaging or aggregating the rankings made by a group of respondents, it is important to note at the outset that making collective decisions should ideally avoid voting that ignores the weight or intensity of respondents’ feelings. The AHP/ANP approach specifically measures such weights by using the ratio scales derived from pairwise comparisons and uses them to generate the ranking. In aggregating the rankings, it has been established by many studies that the weighted geometric mean aggregation is superior to the weighted arithmetic mean and other methods of aggregation. For one, with the geometric mean the reciprocal of the synthesized judgments is equal to the synthesis of the reciprocals of those judgments; the arithmetic mean does not meet this condition. The only aggregation procedure that satisfies the AHP properties, i.e., separability, unanimity, homogeneity, and the reciprocal property, is the geometric mean (Aczel & Saaty, 1983) (Footnote 2). It has also been shown that global priorities of alternatives obtained by the weighted geometric mean aggregation are invariant under the normalization of local priorities of alternatives and weights of criteria (Krejci & Stoklasa, 2018). Notwithstanding these results, in many cases individual respondents may not want their judgments to be combined, only their final outcomes. Hence, in our analysis, we took the geometric mean of the final outcomes only.
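The reciprocal property can be illustrated with a small numerical sketch. The judgment values below are hypothetical (not from our survey), and `geometric_mean` and `arithmetic_mean` are illustrative helpers, not part of any AHP software:

```python
import math

# Hypothetical judgments from three respondents for a single pairwise
# comparison: "how many times is policy A preferred over policy B?"
# (illustrative values only, not data from the survey)
judgments = [2.0, 4.0, 8.0]

def geometric_mean(xs):
    return math.prod(xs) ** (1.0 / len(xs))

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

# Reciprocal property: the aggregate of the reciprocal judgments must be
# the reciprocal of the aggregate (Aczel & Saaty, 1983).
g = geometric_mean(judgments)                          # 4.0
g_rec = geometric_mean([1.0 / x for x in judgments])   # 0.25 = 1/4
assert abs(g_rec - 1.0 / g) < 1e-12                    # holds

a = arithmetic_mean(judgments)                         # ~4.667
a_rec = arithmetic_mean([1.0 / x for x in judgments])  # ~0.292, but 1/a ~ 0.214
assert abs(a_rec - 1.0 / a) > 1e-3                     # fails for arithmetic mean
```

With the geometric mean it does not matter whether respondents judge “A over B” or “B over A”: the synthesized result is the same judgment inverted, which is exactly why it is the aggregation of choice here.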

On the consistency of ranking, due to the inevitable inconsistency among judgments (the human mind is inherently inconsistent), it is necessary to derive a preference ranking that falls within an acceptable level of inconsistency. As described in Appendix B, this can be done by using the principal eigenvector of a matrix of pairwise comparisons. It can be shown that such a matrix is consistent if and only if its maximum eigenvalue equals the number of compared items (\(\lambda _{max} = n\)).

Another important property for the consistency of judgments is related to the concepts of closeness and dominance. Consider the case where we have to rank a set of policies based on their net benefits. Suppose the net benefit of policy-1 is higher than that of policy-2 by 5 million rupiah, and policy-2’s net benefit is 3 million rupiah more than that of policy-3. According to metric topology, policy-1 has a greater net benefit than policy-3 by 8 million rupiah. We can also have a case where policy-1 is five times more preferable than policy-2, and policy-2 is three times more preferred than policy-3. According to the topology of order, the preference toward policy-1 is 15 times larger than that toward policy-3; that is, policy-1 dominates policy-3 fifteen times. It is this dominance property, rather than closeness, that is essential for the consistency measure. Hence, we used the topology of order in our survey, which did not require any unit of measurement (it is dimensionless), unlike the measures commonly used in the physical sciences. Given that respondents’ preferences may change due to emerging new information or circumstances (e.g., the COVID pandemic), allowing inconsistency is important: continued revisions of understanding and judgments are needed as people’s experiences change over time. There is, however, a limit to the degree of inconsistency for the survey results to be usable. As explained in Appendix B, the limit is 10%, and we used that limit in our survey.
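The eigenvector-based consistency check described above and in Appendix B can be sketched numerically. The 3×3 matrix below is hypothetical, built from the dominance example (5×, 3×, hence 15×) rather than from the survey, and the random index value is the commonly tabulated one for matrices of size three:

```python
import numpy as np

# Hypothetical reciprocal pairwise comparison matrix from the dominance
# example: policy-1 is 5x preferred over policy-2, policy-2 is 3x preferred
# over policy-3, hence policy-1 dominates policy-3 by 15x. Because
# a_ik = a_ij * a_jk throughout, this matrix is perfectly consistent.
A = np.array([
    [1.0,    5.0,   15.0],
    [1/5.0,  1.0,    3.0],
    [1/15.0, 1/3.0,  1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
lambda_max = eigvals.real[k]

# Priority weights: the normalized principal eigenvector
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()

n = A.shape[0]
ci = (lambda_max - n) / (n - 1)   # consistency index
ri = 0.58                         # commonly tabulated random index for n = 3
cr = ci / ri                      # consistency ratio; accepted when <= 0.10

assert abs(lambda_max - n) < 1e-6  # consistent matrix: lambda_max = n
assert cr <= 0.10                  # within the 10% limit used in our survey
```

A respondent who instead judged, say, policy-1 over policy-3 as 9× rather than 15× would push \(\lambda _{max}\) above \(n\) and the consistency ratio above zero; the survey accepts such judgments as long as the ratio stays at or below 0.10.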

4.2 Reassessing the Ranking

As discussed in Chap. 3, the two key concepts for evaluating the implementability of policy choices in MDT are the revelation principle and the implementation theory. While, in a direct mechanism, we could specifically ask for and identify the true state/profile of MSE and ME when they reveal their preferred SC-compatible policies, in an indirect mechanism the only information required is the ranking of policies. Either way, information about the ranking is needed to identify the optimal choice of policies under different states/profiles.

The problem is that the resulting ranking can differ when we use different approaches. In the Borda count case, given the alternative choices, the ranking is done simply by looking at the relative position of each policy choice. Under a particular state/profile with four alternatives and two players (\(k = 4, n = 2\)), we assign four points to one player’s top-listed alternative and one point to its bottom-listed alternative. We do the same for the other player. We can then select the pair of identical alternatives for both players with the largest total points.

Referring to the example of an implementable case in Sect. 3.1 of Chap. 3 (reproduced below), in state/profile \(\theta \) the alternative \(x_2\) has the most points as it is ranked second by both MSE and ME. Hence, 3 points are assigned by each, and the sum equals 6, the highest among the sums of all pairs. Similarly, under \(\theta '\), the total of the assigned points for \(x_1\) is the highest (6). Hence, \(x_2\) and \(x_1\) are the optimal choices under \(\theta \) and \(\theta '\), respectively (Table 4.1).
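The Borda bookkeeping just described can be sketched in a few lines. The two orderings below are hypothetical; only the feature that \(x_2\) is ranked second by both players is taken from the worked example under \(\theta \):

```python
# Borda count sketch for k = 4 alternatives and n = 2 players: the top-listed
# alternative gets 4 points, the bottom-listed one gets 1. The two orderings
# are hypothetical, chosen so that x2 is ranked second by both players,
# mirroring the worked example under state theta.

def borda(rankings, k=4):
    """rankings: list of ordered lists of alternatives, best first."""
    points = {}
    for ranking in rankings:
        for pos, alt in enumerate(ranking):
            points[alt] = points.get(alt, 0) + (k - pos)
    return points

mse = ["x1", "x2", "x3", "x4"]   # hypothetical ordering for MSE
me  = ["x4", "x2", "x3", "x1"]   # hypothetical ordering for ME

points = borda([mse, me])
assert points["x2"] == 6                    # 3 + 3, the highest pair sum
assert max(points, key=points.get) == "x2"  # x2 is the optimal choice
```

Note that the sketch only records positions (4, 3, 2, 1); it carries no information about how strongly a player prefers one alternative over the next, which is precisely the limitation discussed below.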

Table 4.1 Example of mechanism design: Non-violating case from Chap. 3
Table 4.2 Monotonicity test for direct mechanism: MSMEs With no children participation using a simple ranking approach

Note that the above Borda count ranking ignores the intensity of each choice, something we should not do, especially if we wish to capture agents’ perceptions of the importance of the alternative choices. It is well known that in cases where agents are to rank pairs of alternatives, the intensity or degree of preference matters. Thus, in quantifying and ranking perceptions before evaluating their implementability, we avoided using a simple ranking as in the Borda count, where respondents give a point (a number) to each choice according to their preferred order (1, 2, 3, \(\cdots \), etc.). Instead, we adopted an approach based on pairwise comparisons to conform with the human ability to compare alternative preferences before ranking them. Through the eigenvalue-based calculation and some mathematical procedures, we transformed these pairwise inputs into ratio scales that yielded a consistent ranking of MSMEs’ preferences and used it in the monotonicity test to evaluate implementability.

All rankings we used in our study incorporate such an approach. In the survey, ME and MSE respondents revealed their intensity of preference for one alternative over another when they disclosed their choices through pairwise comparisons, from which we derived the ranking of weights in the form of ratio scales. Obviously, the eventual weights and ranking of the same case under this approach may differ from those obtained using the Borda count. Consequently, the results in terms of implementable policy can differ as well. The following examples show two cases using the two different approaches of assigning weights: one reaches the same conclusion, and the other reaches a very different one.

Table 4.3 Monotonicity test for direct mechanism: MSMEs with no children participation using AHP/ANP-based ratio scale ranking approach

Consider the case under one of the structural variables we used in the survey, namely, MSMEs with no children participation. As shown in Table 4.2, using a simple ranking approach, the optimum policy choice for ME under \(\theta \) is “Interaction-network” (total points equal 12), while, under \(\theta '\), it is “Supporting infrastructure” (also 12). There is no violation of monotonicity since the relative position of “Interaction-network” falls from the first rank in \(\theta \) to the fourth in \(\theta '\). Hence, the “Interaction-network” policy is implementable. When using the ranking based on the AHP ratio scale (Table 4.3), the optimum policy preferred by ME and MSE under \(\theta \) is also “Interaction-network” (total weight 0.479), and under \(\theta '\) it is “Supporting infrastructure” (0.488). For the same reasons cited above, the “Interaction-network” policy is implementable. Thus, this case shows that two different ranking approaches can give the same conclusion.
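The bookkeeping behind this kind of monotonicity check can be caricatured in a few lines. This is a stylized sketch of the informal criterion just applied, not the full monotonicity condition of Chap. 3; apart from the weights 0.479 and 0.488 quoted above, all numbers are hypothetical:

```python
# Stylized sketch of the monotonicity bookkeeping behind Tables 4.2-4.3.
# Only the two top weights (0.479 and 0.488) come from the text; the rest
# are made up for illustration. The rule coded here is the informal
# criterion used above: when the optimal policy changes between states,
# there is no violation only if the old optimum's rank falls in the new state.

def rank_of(policy, scores):
    ordered = sorted(scores, key=scores.get, reverse=True)
    return ordered.index(policy) + 1        # 1 = top rank

def monotonicity_ok(theta, theta_prime):
    best = max(theta, key=theta.get)
    if best == max(theta_prime, key=theta_prime.get):
        return True                          # same optimum in both states
    return rank_of(best, theta_prime) > rank_of(best, theta)

theta = {"Interaction-network": 0.479, "Supporting infrastructure": 0.210,
         "Financing": 0.180, "Regulation & Legal": 0.131}
theta_prime = {"Interaction-network": 0.110, "Supporting infrastructure": 0.488,
               "Financing": 0.230, "Regulation & Legal": 0.172}

# "Interaction-network" falls from rank 1 to rank 4 across the two states,
# so no violation is signalled under this criterion
assert monotonicity_ok(theta, theta_prime)
```

The same bookkeeping applies whether the scores are Borda point sums or AHP weight sums; only the score values, and hence possibly the winning policy, change between the two approaches.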

Table 4.4 Monotonicity test for direct mechanism: MSMEs with a large number of children using a simple ranking approach

A completely different story, however, applies to a number of policy-mix cases. Consider the case of MEs and MSEs with a relatively large number of children (above the median), for which the results of applying a simple ranking are shown in Table 4.4. If the true state/profile is \(\theta \), the optimal policy choice is “Interaction-network & Promotion” (total points equal 29), and under \(\theta '\) it is “Supporting infrastructure & Linkage” (total points equal 30). “Interaction-network & Promotion” does not violate the monotonicity condition because its rank falls from first in \(\theta \) to third in \(\theta '\). The indirect mechanism for this case is shown in Table 4.5. On the other hand, when we used the AHP-based ranking, the monotonicity-compliant policy-mix is “Interaction-network & Linkage,” since it receives the highest rank under both \(\theta \) and \(\theta '\) (Table 4.6).

Table 4.5 Indirect mechanism to align MSE and ME preferences with SCF for the case of no children participation

Clearly, the policy implication would have been different had we used a different ranking approach. Had we neglected the intensity of MSMEs’ perceptions about the importance of the policy-mix for productivity improvement, resources and attention would have been directed toward promotion rather than toward building a network to strengthen interaction. Yet having a linkage that reflects the quality of interaction or network is more important for productivity improvement: more interaction and network do not always result in a better outcome. The quality that could improve the effectiveness of the network and interaction matters more.

Table 4.6 Monotonicity test for direct mechanism: MSMEs with a large number of children using AHP/ANP-based ratio scale ranking approach

4.3 SC-Compatible Policies and Policy-Mix of Social Planners and MSME

In Chap. 2, we devoted our analysis to finding SC-compatible policies and policy-mix that are implementable. That analysis was based on the results of our survey, in which MEs and MSEs have their own preference rankings for achieving productivity improvements. We also showed the mechanism for those policies under limited information, that is, without knowing the true state/profile of MSE and ME. Yet, with all the information and institutional capacity they have accumulated over the years, social planners may have some knowledge and experience with which to select and implement policies or policy-mix. Decades of experience with successes and failures must have led to an improved set of policies. The types of government policies implemented and how they evolved over the years allowed us to synthesize the social planners’ preferred choices of policy. Based on those choices, we could construct a ranking reflecting the social planners’ point of view with respect to policy preferences (Footnote 3).

In this context, we wish to explore the compatibility of the preferences of social planners, MEs, and MSEs. Given that their preferences may differ, to what extent can those preferences coincide with each other? Which preferred policies and policy-mix are implementable, and what mechanisms can be designed for such policies? In view of what we discussed in the preceding section about employing different ranking approaches, would any implementable policies or policy-mix be different in this case?

Applying the same methods as we used for ME and MSE, we obtain the rankings of SC-compatible policies shown in Tables 4.7, 4.8, and 4.9 for the following three pairs: social planners and MSME, social planners and ME, and social planners and MSE, respectively.

There are clearly some ranking gaps between the social planners’ choices and what the MSMEs prefer. If improving productivity is the goal, the former tend to prioritize the need for “Financing,” but MSMEs do not think so. For MSMEs, the three top priorities are having an “Interaction-network,” getting “Supporting infrastructure,” and resolving the problems surrounding “Regulation & Legal matters.” Clearly, this reflects the gap between social planners’ intentions and what MSMEs need. It partly explains why some programs were not effective, had low participation rates, or were beset by problems of corruption (Berry et al., 2001; Musa & Pritana, 1998; Hill, 2001; Sandee et al., 1994; Tambunan, 2007). When MSME is broken down into ME and MSE, the top priority remains having an “Interaction-network.” Interestingly, based on all the information we gathered, the social planners’ choice of “Interaction-network” is the top priority only for MEs. For MSEs, social planners tend to believe that providing “Financing” support is the most important measure to take.

Table 4.7 Summarized ranking of policy preferences of social planners and MSME
Table 4.8 Summarized ranking of policy preferences of social planners and ME
Table 4.9 Summarized ranking of policy preferences of social planners and MSE
Table 4.10 Summarized ranking of policy-mix preferences of social planners and MSME
Table 4.11 Summarized ranking of policy-mix preferences of social planners and ME

In terms of policy-mix, some gaps are also detected. For MSMEs overall, the gap is again in the area of finance. More specifically, “Affordable loan & Liquidity” was prioritized by social planners, especially for MSEs, while according to the MSMEs themselves the priority should be on “Interaction-network & Linkage requirement.” The rankings of SC-compatible policy-mix for MSME, ME, and MSE are shown in Tables 4.10, 4.11, and 4.12, respectively.

Table 4.12 Summarized ranking of policy-mix preferences of social planners and MSE

Given those gaps, we are interested in finding out whether it is possible to find a common ground between social planners and ME and MSE such that some implementable policies and policy-mix can be found. If so, would the ranking of those policies and policy-mix be different had ME and MSE responded to the questionnaire without considering the objectives and the challenges (in their System 1)? More importantly, what would be the mechanism that could lead to an incentive-compatible outcome for ME, MSE, and social planners?

As shown in Tables 4.8 and 4.9 earlier, when the social planners’ preference is paired with that of MEs and MSEs individually, the priority ranking differs from that revealed by the ME-MSE pair. Recall that the optimal and implementable policies for the ME-MSE pair were “Interaction-network” under \(\theta \) and “Supporting infrastructure” under \(\theta '\). For the pair of social planners and ME, the optimal and implementable policy is “Interaction-network” in both \(\theta \) and \(\theta '\), with weight sums of 0.535 (0.239 + 0.296) and 0.458 (0.162 + 0.296), respectively (Table 4.13). For the pair of social planners and MSE, the implementable policy differs between \(\theta \) and \(\theta '\): “Interaction-network” in the former and “Financing” in the latter. Table 4.15 displays the corresponding mechanism. Consistent with the ranking shown in Table 4.14, social planners were clearly more inclined to think that MSEs need more funding than anything else, in contrast to what MEs and MSEs preferred when the social planners were not included.

Table 4.13 Monotonicity test for direct mechanism: Policies for ME and social planners
Table 4.14 Monotonicity test for direct mechanism: Policies for MSE and social planners
Table 4.15 Indirect mechanism to align MSE and social planners’ preferences with SCF
Table 4.16 Monotonicity test for direct mechanism: Policy-mix for ME and social planners

Next is the policy-mix. Based on the rankings shown in Tables 4.16 and 4.17, the implementable policies in the cases of social planners versus ME and social planners versus MSE are different. In the former (Table 4.16), “Affordable loan & Linkage” is the implementable measure under both \(\theta \) and \(\theta '\), with weight sums of 0.282 and 0.283, respectively, while in the latter (Table 4.17) it is the policy-mix of “Affordable loan & Liquidity” (0.280 in \(\theta \) and 0.260 in \(\theta '\)).

Table 4.17 Monotonicity test for direct mechanism: Policy-mix for MSE and social planners

The use of different ranking techniques is once again shown to produce different results. When the process of finding SC-compatible policies and policy-mix is applied to a case involving social planners, we have shown that in some cases (e.g., MSMEs with no children participation) the resulting implementable policies can be the same, but in others (e.g., MSMEs with a large number of children) they are very different. The implications for policy measures are important in that the resulting resource allocation can be sub-optimal. In particular, a considerable amount of resources would have been unwisely spent on promotion had the intensity of MSMEs’ perceptions of the policy-mix been ignored. According to our analysis, in that particular case the focus of policy intervention ought to be directed instead toward providing the supporting infrastructure to help establish and strengthen a network through which MSMEs can interact among themselves and with other stakeholders.