Introduction

The design of integrated circuits and systems, in particular with analog and mixed-signal units, is a well-known subject of increasing technical and economic significance. The underlying optimization of the design parameters, from device sizing for established circuits to the creation or synthesis of customized or novel circuits, is traditionally executed by experienced human designers, but the issue of analog design automation has been pursued in academia and industry for more than two decades now. The advance of Moore's law and the increasing complexity and heterogeneity of established and leading-edge processes, along with increasing robustness and dependability requirements, further kindle the interest in efficient analog electronics design automation. In addition to circuit sizing/creation for a new task, the migration of existing circuit libraries to a new technology is also a rewarding task for automation. The addition of statistical, drift, and aging considerations for design centering and yield optimization, including layout synthesis and post-layout simulation results in the loop, can be witnessed from concept to commercial tool level, e.g., in MunEDA's WiCkeD framework [1]. Predominantly, however, univariate statistical approaches are used, the employed optimization methods are computationally costly, and the design process is rather opaque and lacks interactivity options for the user during the design process.

In our work, we pursue the conception of a modular, multi-platform, and open-access Python system for analog design automation, denoted as the Analog Block and system Sizing and sYNTHesis (ABSYNTH) framework. The research goals are the design of an evolving, self-learning architecture for the incremental inclusion of cells and knowledge with efficient reuse, the employment of well-performing hierarchical optimization by cascading methods, e.g., support vector regression (SVR) and proven work-horses like particle swarm optimization (PSO) and harmony search (HS), during the automated design process, and the addition of transparency and improved user interaction through domain-specific visualization techniques. The self-learning approach promises to balance the high speed but limited accuracy of function approximation techniques against the high accuracy but low speed of simulation-based techniques, additionally removing the need for the massive set of examples otherwise required to train the function approximators.

In this paper, we will focus on a three-level optimization cascade and novel visualization techniques, which are demonstrated for the nominal sizing of commonly used circuits [2–5] at the schematic level.

In Sect. 2, the state-of-the-art is briefly rehearsed. Section 3 describes the baseline of our research and investigation, referring to the previously outlined state-of-the-art. Section 4 presents the architecture of the design environment, and Sect. 5 introduces our custom dynamic optimization space visualization. Before concluding, Sect. 6 presents and discusses our experiments and obtained results.

State-of-the-art

Analog design automation has been the subject of intensive research for the last three decades, starting with works like IDAC [6] and OASYS [7]. There are four major approaches to tackle this problem: (1) knowledge-based [6, 8], (2) equation-based [9, 10], (3) simulation-based [11, 12], and (4) model-based approaches [13, 14], as surveyed in [15]. Each approach has its advantages and disadvantages. The first two approaches require considerable preparatory work for each circuit individually. The substantial time and expertise required for this preparatory work hampers the introduction of new circuits, the growth of the design database, and the corresponding increase in productivity [16]. Simulation-based approaches can be very effective in this regard; however, they consume a lot of computational resources and time if applied in a straight or flat form. With the increase in the speed of computing machinery, this problem is not as significant as it was a decade ago, but the complexity of the design tasks is also increasing. The fourth approach uses regression methods like support vector regression (SVR) and neural networks to create a model-based equivalent representation as a replacement for simulators and costly detailed simulation runs. Though the model-based computations are extremely fast, they require careful training and a significant number of samples, selected in a time-consuming and sensitive process for each circuit and process technology pair, to reach an acceptable prediction accuracy [17].

Optimization algorithms are an essential part of the last three approaches. While older equation-based approaches used simulated annealing [12], genetic algorithms [18], etc., approaches with posynomial equations and geometric programming have become more predominant in the last decade [19–21]. From our survey, we found that genetic algorithms (GA), genetic programming, and simulated annealing have been predominantly used in simulator- and model-based approaches. Other evolutionary algorithms, such as particle swarm optimization (PSO) [22], differential evolution (DE), harmony search (HS) [23], and artificial bee colony optimization (ABC), which have become popular in other fields, have also been used to a lesser degree in these approaches. Some notable works are [2–5]. In the first work [5], an ultra-low-power Miller OTA and a three-stage Miller op-amp were sized using PSO, GA, and a modified PSO called HPSO; the authors found that HPSO converges better than the alternatives. In [2], a Miller OTA was optimized using DE, PSO, and ABC, and the algorithms were compared. The authors found that DE performs better than PSO, while ABC does not reach the targets in this scenario. In the third work [3], nth-order filters were optimized by PSO, ABC, HS, and DE. The authors found that, while HS is the fastest algorithm, it also has the highest error, while the other algorithms converged better. In [4], comparing DE and HS, the authors came to a similar conclusion. In our experiments, comparing standard PSO (SPSO) 2007, SPSO 1995, harmony search, and cuckoo search, we found that PSO (SPSO95) and HS are able to provide fast solutions reaching our desired target values. We will employ PSO and HS in our work in a more involved way, both individually for reference purposes and in a dedicated hierarchy. We will also show that our approach produces effective results even with default parameter settings.

Fig. 1

Schematic of Miller amplifier used in this work

Baseline of investigation

In general, analog sizing and synthesis has been applied to a rich variety of practical circuits, e.g., filters, oscillators, PLLs, amplifiers, and comparators, and in part new circuits have been created or evolved by the algorithms. However, most of the work we refer to has focused on amplifier circuits. Thus, we considered three typical single-ended op-amp circuits of increasing complexity as a research vehicle for this work. These are a two-stage Miller amplifier (MA) [24], a three-stage buffer amplifier (BA) [24], and a folded-cascode amplifier (FCA). For all these circuits, the length of each transistor was fixed at 1 \(\upmu \)m and the width was chosen as a design parameter. Narrowing down the degrees of freedom by choosing a unit length for all transistors, the number of devices corresponds to the number of optimization parameters. These variables are the parameters used for the optimization process inside the framework. There is one restriction in the investigations: the passive components are fixed to typical values and are not yet subject to optimization themselves. Knowledge-based information, such as matching information or symmetry constraints for transistor pairs, was obtained from the schematic and provided to the system's optimization engine, further reducing the number of parameters.

Circuits

The MA shown in Fig. 1 is taken from [24]. It consists of ten transistors. After symmetry considerations, we optimize eight design parameters, while pursuing ten objectives in the multi-objective approach, as shown in Table 4.

The second amplifier is a BA, shown in Fig. 2 and taken from [24]. It consists of twenty-four transistors. After symmetry considerations, we optimize nine parameters, while pursuing ten objectives in the multi-objective approach, as shown in Table 4.

Fig. 2

Schematic of buffer amplifier used in this work

The third amplifier is an FCA [24], shown in Fig. 3. The circuit consists of twenty-nine transistors. After symmetry considerations, we optimize 18 parameters, while again pursuing ten objectives in the multi-objective approach, as shown in Table 4. The large number of transistors is attributed to the type of bias circuit used in constructing this circuit. This FCA was created in a practical activity of our group as part of a voltage-controlled voltage source for impedance spectroscopy.

Fig. 3

Schematic of folded-cascode amplifier used in this work

Meta-heuristic algorithms

In this work, we focus on two of the algorithms mentioned in Sect. 2. The first one is PSO, which has been used in analog design automation before [2, 3, 5]. In our work, we use PSO as presented in [22]. The parameters of PSO are described below.

  • w: inertia weight; typ. range: [0, 1].

  • c1: cognitive scaling factor; typ. range: [0, 2].

  • c2: social scaling factor; typ. range: [0, 2].

  • r1, r2: random values between 0 and 1, drawn anew for each update.

  • Velocity: the particle's velocity.

  • Local: the particle's best known position.

  • Global: the swarm's best known position.

  • Current: the current position of the particle.
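For concreteness, a minimal sketch of the velocity and position update that these parameters govern, following the standard PSO formulation of [22] (function and argument names are our own illustration):

    import numpy as np

    def pso_step(current, velocity, local_best, global_best,
                 w=0.7, c1=1.5, c2=1.5):
        """One standard PSO update for a single particle (vectors as numpy arrays)."""
        r1, r2 = np.random.rand(2)                    # fresh random values in [0, 1]
        velocity = (w * velocity
                    + c1 * r1 * (local_best - current)    # cognitive pull (own best)
                    + c2 * r2 * (global_best - current))  # social pull (swarm best)
        return current + velocity, velocity               # new position, new velocity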

The second algorithm is harmony search (HS). It is a music-inspired algorithm, which has been studied for analog design automation in a few papers [3, 4]. In this work, we use harmony search as described in [23].

The parameters of the HS algorithm are provided below:

  • HMS (harmony memory size): problem-dependent.

  • HMCR (harmony memory considering rate): typ. range: [0.7, 0.99].

  • PAR (pitch adjusting rate): typ. range: [0.1, 0.5].
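A minimal sketch of the improvisation step that these parameters control, following [23]; the pitch-adjustment bandwidth bw is a further standard HS parameter not listed above, and all names are illustrative:

    import random

    def improvise(memory, lower, upper, hmcr=0.9, par=0.3, bw=0.05):
        """One HS improvisation: build a new candidate from the harmony memory."""
        candidate = []
        for d in range(len(lower)):
            if random.random() < hmcr:              # consider the harmony memory ...
                value = random.choice(memory)[d]
                if random.random() < par:           # ... and possibly adjust the pitch
                    value += random.uniform(-bw, bw)
            else:                                   # otherwise pick a random value
                value = random.uniform(lower[d], upper[d])
            candidate.append(min(max(value, lower[d]), upper[d]))  # clip to bounds
        return candidate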

Multi-objective optimization approach

We employ an agglomerative approach for the fitness function computation in multi-objective optimization, which uses the weighted sum of the normalized fitness values of the individual objectives. A thresholded normalized difference between the target specification and the simulator output is used to obtain the individual fitness values, as described in Eq. (1).

$$\begin{aligned} f = \left\{ \begin{array}{ll} (\mathrm{ov} - \mathrm{spec})/\mathrm{spec} &{} \quad \text {for minimum search}\\ (\mathrm{spec} - \mathrm{ov})/\mathrm{spec} &{} \quad \text {for maximum search} \end{array} \right. \end{aligned}$$
$$\begin{aligned} \mathrm{fit} = \left\{ \begin{array}{ll} f &{} \quad \text {if} \; f > 0 \\ 0 &{} \quad \text {if} \; f \le 0 \end{array} \right. \end{aligned}$$
(1)

where ov is the obtained value from the function approximators or simulators. Due to this advantageous normalization, unity weights could be successfully employed for the regarded moderate-complexity circuits, unlike in previous works [2–5], where finding appropriate weights for an unnormalized cost function represented a major challenge. Moderate sensitivity investigations showed that non-unity weights offer no perceivable advantage in result quality. Only if one of several objectives is valued considerably higher than the others does the application of non-unity weights become meaningful.
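In code, the agglomerative cost of Eq. (1) amounts to the following direct transcription (function and argument names are our own):

    def objective_fitness(ov, spec, minimize):
        """Thresholded normalized deviation of Eq. (1); 0 once the spec is met."""
        f = (ov - spec) / spec if minimize else (spec - ov) / spec
        return max(f, 0.0)

    def total_fitness(values, specs, senses, weights=None):
        """Agglomerative cost: weighted sum over all objectives, unity weights by default."""
        weights = weights or [1.0] * len(specs)
        return sum(w * objective_fitness(v, s, sense == "min")
                   for w, v, s, sense in zip(weights, values, specs, senses))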

Table 1 Simulator execution time for one fitness run of Miller amplifier
Table 2 Execution time for simulators with PSO algorithm on Miller amplifier

Time measurement

In our work, we deal with two simulators, the open-source ngspice simulator (NGS) and Cadence Virtuoso OCEAN (OCN). We use the ams hit-kit 4.1 with the 0.35 \(\upmu \)m technology. All the experiments were performed on a Core 2 Duo PC running at 2 GHz with 4 GB of memory. We have used the Linux time command to measure the time taken for the simulations. This provides three time values: real time, which is the same as wall-clock time; user time, which is the time the simulation was using the CPU resources; and system time, when the program was accessing the kernel. The sum of user time and system time provides information independent of the other processes running in the system. In all cases, the time values mentioned are the mean of five or more runs.
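The same user-plus-system figure can also be captured from within Python when launching a simulator as a child process; the following sketch is one way to do it (the netlist name is illustrative):

    import resource
    import subprocess

    def cpu_seconds(cmd):
        """User + system CPU time consumed by a child process (load-independent)."""
        before = resource.getrusage(resource.RUSAGE_CHILDREN)
        subprocess.run(cmd, check=True)
        after = resource.getrusage(resource.RUSAGE_CHILDREN)
        return ((after.ru_utime - before.ru_utime)     # user time
                + (after.ru_stime - before.ru_stime))  # system time

    # e.g., cpu_seconds(["ngspice", "-b", "miller.cir"])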

Fig. 4

ABSYNTH flow diagram showing the concept and its work flow with available methods shown in grey

ABSYNTH concept and architecture

Hybrid multi-objective optimization approach

The ABSYNTH concept is shown in Fig. 4. As mentioned in Sect. 2, the accuracy vs. speed properties of model-based and simulator-based approaches are substantially different, which is illustrated in Fig. 5. The time comparison for running simulations with these methods is provided in Tables 1 and 2. Further, the BSIM3v3 version employed by Cadence and the ams hit-kit is 3.24, while ngspice uses the newer BSIM3v3 version 3.3. This leads to subtle discrepancies in the results, which are not harmful in our hierarchical approach.

From the information above, we can conclude that seeding the more accurate simulators with the results of the less accurate ones provides a good balance. This is exploited in our work by cascading the methods as shown in Fig. 4. The mixture of random seeds and seeds from the simulators helps in maintaining diversity. This approach has been shown to be very robust to problems with an insufficiently trained SVR model during the start-up phase of the self-learning system explained in Sect. 4.2, as such a model will only increase the time consumed to reach the results, but will not affect their quality.
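As a schematic sketch, the seeding between two levels of the cascade can be written as follows; the fraction of random seeds, which in ABSYNTH is controlled as in Algorithm 1, and all names are illustrative:

    import random

    def seed_next_level(prev_best, bounds, pop_size, random_frac=0.3):
        """Seed the next, more accurate level with survivors from the cheaper one,
        topped up with fresh random solutions to maintain diversity."""
        n_random = int(round(pop_size * random_frac))
        seeds = [list(s) for s in prev_best[:pop_size - n_random]]
        seeds += [[random.uniform(lo, hi) for lo, hi in bounds]
                  for _ in range(n_random)]
        return seeds

    # cascade sketch: SVR model -> ngspice (NGS) -> OCEAN (OCN)
    # ngs_pop = seed_next_level(best_from_svr, bounds, pop_size=40)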

Fig. 5

Comparison of speed and accuracy between function approximators and simulation tools

Algorithm 1 (pseudocode)

Incrementally evolving self-learning capability

In standard model-based approaches using function approximators, usually neural networks [26] or support vector regression [13, 17] methods, etc., are employed together with sufficient and suitable training data for each investigated circuit to predict the simulator results [17]. Acquiring these data and training the named methods take substantial time until acceptable results are achieved. This implies that for every new circuit the designer has to cope with this overhead, even for sophisticated methods like the active training described in [17]. In our work, in contrast, we incrementally obtain the initial SVR training data for a new circuit from the results generated by the meta-heuristic algorithms on the NGS and Cadence OCN simulators in previous runs. Thus, we have a high initial simulation effort for a new circuit, which decreases with the number of conducted circuit simulations in our evolving self-learning architecture. The procedure is transparent to the user, i.e., there is no workload overhead imposed on the user. This is illustrated in Fig. 6. The SVR parameters \(\gamma \), \(\varepsilon \), and K are found automatically using the same meta-heuristic algorithms, e.g., PSO or HS, as an efficient alternative to the commonly used grid search. This search is done only during the first training phase for each new circuit. Here, the percentage of random seeds, as shown in Algorithm 1, can be controlled based on the number of iterations in NGS for effective learning in SVR.
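As an illustration, this hyperparameter search can be realized by letting the meta-heuristic minimize a cross-validation error; the sketch below uses scikit-learn's SVR as an assumed implementation (not named in the text), with the kernel/cost parameter K written as scikit-learn's C:

    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    def svr_hyper_cost(params, X, y):
        """Cross-validated MSE of an RBF-SVR; the fitness that a meta-heuristic
        (PSO or HS) minimizes instead of performing a grid search."""
        gamma, epsilon, C = params
        model = SVR(kernel="rbf", gamma=gamma, epsilon=epsilon, C=C)
        scores = cross_val_score(model, X, y, cv=5,
                                 scoring="neg_mean_squared_error")
        return -scores.mean()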

Fig. 6

Incrementally evolving self-learning architecture. The size of the circle denotes the time, while the pie chart shows the effort distribution

Fig. 7

Current status of ABSYNTH implementation. The planned future improvements are in grey

Status of the ABSYNTH architecture’s implementation

Figure 7 shows a block diagram with all the current elements and the building blocks planned in the immediate future of this work.

Fig. 8

Visualization techniques currently used in ABSYNTH. The goalpost view is similar to an orthogonal version of the radar plot

TRAVISOS: optimization space visualization

The monitoring of the optimization process by visualization means adds transparency to the design process and allows for assessing the quality of the obtained solution [27, 28]. In addition to the conventional cost-function-over-time visualizations, as shown in Fig. 8, the optimization space itself and the evolution of the regarded population of optimization solutions can be elucidated by suitable visualization techniques. The underlying problem of optimization space visualization is closely related to the well-known task of feature space visualization in pattern recognition and intelligent system design [29, 30]. As in these related fields, the high-dimensional data, comprised here of the design or sizing parameters in analog circuit and system design automation, have to be subjected to a dimensionality-reducing mapping, e.g., multi-dimensional scaling (MDS) and, in particular, non-linear mapping (NLM) methods, like Sammon's mapping and its extension to data recall (NLMR) [27, 29, 31]. The application of these methods allows the generation of a lower-dimensional, e.g., three-dimensional, similarity-preserving scatter plot, which shows solution quality and the relative location of the solutions. For instance, the latter information allows one to understand which regions of the solution space have been explored and what level of diversity is currently maintained by the optimization algorithm. This can naturally be extended from single-snapshot monitoring of the optimization process to a complete solution swarm trajectory visualization. As will be shown in the following experimental section, our suggested visualization approach can give numerous salient insights not obtainable from the conventional cost or progress curve plots. In addition, the approach opens the door to interactive visualization and optimization [28] by allowing selective user manipulations from one population to the next.

The proposed new heuristic method for solution swarm trajectory visualization in the regarded optimization space is illustrated in Algorithm 2. First, a standard NLM is computed based on the initialization data of the optimization problem. Then, the projections of all swarm elements for the next and all following populations are computed by the NLMR, which uses the previously obtained results as anchor points. Thus, successive mappings with smooth transitions of solution locations for the trajectory visualization can be computed at low cost. However, it is a well-known fact that all dimensionality-reducing mappings have their problems in terms of displaying an unavoidable mapping error, related both to the intrinsic dimensionality of the data to be mapped and to the employed mapping method itself. This means that solutions with unchanged location in the original design parameter space could see unjustified and disturbing location fluctuations in successive projections.

To avoid this problem, in our work, the previous position in the projection space of solutions with unchanged location in the design space is simply copied to the new projection of the next optimization iteration; only the solutions with changes are subjected to the NLMR projection.
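This copy-unchanged heuristic can be sketched as follows; nlmr_project, standing for the NLMR recall of a single point against the anchor sets, is assumed to exist, and all names are illustrative:

    import numpy as np

    def travisos_step(prev_X, prev_Y, new_X, nlmr_project):
        """Project one swarm generation: unmoved solutions keep their old
        low-dimensional position; only moved ones are re-projected via NLMR."""
        new_Y = prev_Y.copy()
        for i, x in enumerate(new_X):
            if not np.allclose(x, prev_X[i]):       # solution moved in design space
                new_Y[i] = nlmr_project(x, prev_X, prev_Y)  # recall vs. anchors
        return new_Y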

Fig. 9

One run of the described work flow showing the integrated visualization techniques employed in the front-end user interface of ABSYNTH

Algorithm 2 (pseudocode)

Summarizing, the TRAVISOS heuristic mapping approach given in Algorithm 2 allows the creation of solution swarm trajectory visualizations for the regarded optimization space and problem at low to moderate computational cost. This can be employed for transparent analysis and user-centered interactive optimization, or designer-in-the-loop optimization, e.g., [27]. The suggested TRAVISOS algorithm and the overall approach are salient for, but not limited to, the analog sizing and synthesis activities regarded in this work.

Experiments and results

Circuit sizing methods results

As prepared in the previous sections, here we study two different optimization cascades embedded in our hybrid self-learning architecture. The first optimization cascade employs the PSO algorithm for all three steps in the cascade; this will be referred to as PPP. The second approach employs the harmony search algorithm on the model-based (SVR) and NGS levels, while PSO is the choice for the OCN simulation level; this will be referred to as HHP. Table 4 shows the mean results of a minimum of five runs of each cascade. The target specifications, which comprise ten objectives in the multi-objective approach, have been taken from the respective references mentioned above with the circuits. We have also added the resulting specifications manually reached in our group by students with moderate experience in analog design. A flat reference run of the PSO-OCN combination is additionally included for comparison. All the transistor sizes are in integer steps in this work; however, the resolution can be changed as desired. The experiment parameters can be found in Table 3. The minimum and maximum values shown in Table 3 are the soft boundary conditions for the algorithms. These are also the transistor size limits. At present, area has not been included as an optimization objective in the algorithm (Fig. 9).

Table 3 Experiment parameters used for experiments in Table 4 and visualizations
Table 4 Result assessment of hybrid vs. manual and flat approaches
Fig. 10

Conventional visualization of partially evolved examples for MA, BA and FCA

Fig. 11

3D TRAVISOS visualization showing the comparison between an insufficiently trained SVR and NGS for the FCA using the same parameters. Here, an 18-dimensional space has been reduced to three dimensions using the TRAVISOS algorithm

Fig. 12

Comparison of conventional visualization and TRAVISOS 1: conventional visualization

Fig. 13

Comparison of conventional visualization and TRAVISOS 2: TRAVISOS SVR generation 0

From these results, we can see that both algorithm cascades perform at least competitively and are able to reach their targets much faster than the corresponding manual and flat approaches. We can also see that using harmony search to provide the seeds is faster than using PSO, without any reduction in the quality of results. In Fig. 10, we can see the progress of the fitness function over the iterations for one example per circuit type. In the case of the MA, we can see how the fitness improves ideally for both the PPP and HHP approaches (HHP reaches the target in a smaller number of iterations). In the second scenario, with the BA, we can observe that the SVR model fitness and the ngspice fitness differ; this is because of the model's training and the scale of the y-axis, which is much smaller for this example. In the FCA example, the SVR is unable to reach a meaningful fitness value; however, NGS and OCN are able to reach the targets. These examples underpin the robustness of such a hybrid approach. From Figs. 10a and 12, we can observe the evolution of the MA as described in Fig. 6. Even though the latter takes more iterations than the former, the computational effort is much lower. Further, it is possible to stop the SVR much earlier, when it reaches saturation with little or no error reduction (see Fig. 12).

Fig. 14

Comparison of conventional visualization and TRAVISOS 3: TRAVISOS SVR generation 1300

Fig. 15

Comparison of conventional visualization and TRAVISOS 4: TRAVISOS SVR generation 1953

Fig. 16

Comparison of conventional visualization and TRAVISOS 5: TRAVISOS NGS generation 1 (1954 in Fig. 12)

Fig. 17

Comparison of conventional visualization and TRAVISOS 6: TRAVISOS OCN generation 3 (1957 in Fig. 12). A complementary video showing the complete process can be found in [32]

Visualization methods results

In this section, we would like to demonstrate the advantages of the suggested TRAVISOS method in giving additional insight into crucial steps of the optimization processes, using selected examples and visualization snapshots from the investigations reported above. However, the main advantage of the TRAVISOS method and tool will only become fully visible when it is interactively employed in the ABSYNTH design framework. We have attached a complementary video (online) [32] generated from visualizations of all optimization iterations for one example of PSO with the Miller amplifier. Each point in the video represents one solution. One has to be reminded that in the given form of visualization, reducing from ten or more dimensions down to three, the values given on the three axes only express similarity or closeness of design parameter sets, but have no explicit physical meaning.

Table 5 Comparison of different ADA techniques with ABSYNTH

First, we regard the example of the FCA discussed above. Though the conventional visualization given in Fig. 10c is salient, more relevant information can be provided. In Fig. 11, we have visualized snapshots of the results of one particular, tentatively trained SVR for PPP by the TRAVISOS approach. Additionally, we have run a complete simulation using ngspice, merged the two data sets, and visualized the resulting solution space.

From Fig. 10c, we cannot understand the issue, as information on solution diversity, clogging, clustering, or coverage in the optimization space cannot explicitly be extracted. In contrast, TRAVISOS allows this, showing that the SVR model here has not been trained with sufficient data and is compelled to move towards one, obviously not too fortunate, region of the optimization space, while the ngspice results, which achieved good fitness, are more diverse and spread out into more fortunate regions. The visualization helps to understand the optimization space, as well as the current aptness of the SVR training for new cells, and opens the door for interactive optimization.

In Figs. 12, 13, 14, 15, 16 and 17, the entire cascade of the optimization process is visualized by the TRAVISOS method, limited to a series of representative snapshots here. A complementary video showing the process can be found in [32].

These figures clearly show the movement of the PSO particles for the SVR model in the optimization space and the evolution of the solution quality. It can then be monitored how these are translated into solutions in the NGS simulations. In the end, we see the final solutions achieved by the OCN.

While the training of the SVR model can in general be understood as a continuing process without a definite termination, the visualization can help to assess whether a sufficient training quality has been achieved. In the investigated case, the SVR seems to have been satisfactorily trained, as can be understood from the unchanged solution space for the final result.

This is confirmed by the following simulation steps, since the NGS simulations finish in one generation and the OCN reaches the target in just three generations.

Summarizing, the TRAVISOS method complements conventional graphical monitoring and assessment techniques for optimization processes. Even the simple examples of the first realization step given here show that it provides salient additional information to better understand, and in the future guide, the optimization and the underlying design process.

Conclusions

In this paper, we have introduced three novel contributions to the vivid field of electronic design automation for analog and mixed-signal circuits and systems. Inspired by concepts from computational intelligence, we introduced an evolving self-learning architecture that alleviates the introduction of new circuits into the supported cell spectrum, and we introduced and demonstrated a hierarchical multi-objective optimization cascade built from SVR, PSO, and HS in the context of this architecture, which saves effort in general and is flexible with regard to existing a priori knowledge vs. required computational effort. This approach was demonstrated as part of our emerging ABSYNTH environment, together with Cadence tools, ngspice, and the ams AG 0.35 \(\upmu \)m CMOS technology, on the schematic level for amplifier structures commonly used in related work, but with a more comprehensive set of specification values as optimization goals in an agglomerative approach. Results compared to conventional flat optimization approaches and to manual design activities showed the viability and salience of our approach: e.g., in the best case of the Miller amplifier, HHP, the best variant of our approach, consumed only 26% of the time of the flat approach while fully meeting the specifications. A final comparison of the properties and advantages of ABSYNTH with other ADA methods is shown in Table 5.

Further, we introduced a novel visualization method of the optimization space and trajectory (TRAVISOS) that allows more efficient and transparent human supervision of optimization process properties, e.g., diversity and neighborhood relations of solution qualities.

In future work, we will extend the palette of pursued specification values to area, etc., take our work to higher-level circuits, e.g., instrumentation amplifiers, filters, phase-locked loops, and voltage-controlled current sources, or even non-linear circuits, and add statistical, yield-related optimization, circuit breeding, as well as physical layout generation, extraction, and inclusion in the optimization loop. In particular, we will also extend our work on the TRAVISOS method, moving it from an analysis tool to an interactive tool achieving designer-in-the-loop functionality, i.e., letting the designer observe, potentially interfere with, and guide the optimization by existing expert knowledge or intuition to explore better locations in the optimization space faster.