We formulate an equivalence between machine learning and statistical data assimilation as widely used in the physical and biological sciences. The correspondence is that layer number in a feedforward artificial network setting is the analog of time in the data assimilation setting. This connection has been noted in the machine learning literature. We add a perspective that expands on how methods from statistical physics and aspects of Lagrangian and Hamiltonian dynamics play a role in how networks can be trained and designed. Within the discussion of this equivalence, we show that adding more layers (making the network deeper) is analogous to adding temporal resolution in a data assimilation framework. Extending this equivalence to recurrent networks is also discussed. We explore how one can find a candidate for the global minimum of the cost functions in the machine learning context using a method from data assimilation. Calculations on simple models from both sides of the equivalence are reported. Also discussed is a framework in which the time or layer label is taken to be continuous, providing a differential equation, the Euler-Lagrange equation and its boundary conditions, as a necessary condition for a minimum of the cost function. This shows that the problem being solved is a two-point boundary value problem familiar in the discussion of variational methods. The use of continuous layers is denoted "deepest learning." These problems respect a symplectic symmetry in continuous layer phase space. Both Lagrangian versions and Hamiltonian versions of these problems are presented. Their well-studied implementation in a discrete time/layer, while respecting the symplectic structure, is addressed. The Hamiltonian version provides a direct rationale for backpropagation as a solution method for a certain two-point boundary value problem.

}, keywords = {algorithm, bound-constrained optimization, Computer Science, integrators, Neurosciences \& Neurology}, isbn = {0899-7667}, doi = {10.1162/neco_a_01094}, url = {Networks of nonlinear systems contain unknown parameters and dynamical degrees of freedom that may not be observable with existing instruments. From observable state variables, we want to estimate the connectivity of a model of such a network and determine the full state of the model at the termination of a temporal observation window during which measurements transfer information to a model of the network. The model state at the termination of a measurement window acts as an initial condition for predicting the future behavior of the network. This allows the validation (or invalidation) of the model as a representation of the dynamical processes producing the observations. Once the model has been tested against new data, it may be utilized as a predictor of responses to innovative stimuli or forcing. We describe a general framework for the tasks involved in the "inverse" problem of determining properties of a model built to represent measured output from physical, biological, or other processes when the measurements are noisy, the model has errors, and the state of the model is unknown when measurements begin. This framework is called statistical data assimilation and is the best one can do in estimating model properties through the use of the conditional probability distributions of the model state variables, conditioned on observations. There is a very broad arena of applications of the methods described. These include numerical weather prediction, properties of nonlinear electrical circuitry, and determining the biophysical properties of functional networks of neurons. 
Illustrative examples will be given of (1) estimating the connectivity among neurons with known dynamics in a network of unknown connectivity, and (2) estimating the biophysical properties of individual neurons in vitro taken from a functional network underlying vocalization in songbirds. Published by AIP Publishing.

}, isbn = {1054-1500}, doi = {10.1063/1.5001816}, url = {We assess the utility of an optimization-based data assimilation (D.A.) technique for treating the problem of nonlinear neutrino flavor transformation in core-collapse supernovae. D.A. uses measurements obtained from a physical system to estimate the state variable evolution and parameter values of the associated model. Formulated as an optimization procedure, D.A. can offer an integration-blind approach to predicting model evolution, which is an advantage for models that thwart solution via traditional numerical integration techniques. Further, D.A. performs best for models whose equations of motion are nonlinearly coupled. In this exploratory work, we consider a simple steady-state model with two monoenergetic neutrino beams coherently interacting with each other and a background medium. As this model can be solved via numerical integration, we have an independent consistency check for D.A. solutions. We find that the procedure can capture key features of flavor evolution over the entire trajectory, even given measurements of neutrino flavor only at the endpoint, and with an assumed known initial flavor distribution. Further, the procedure permits an examination of the sensitivity of flavor evolution to estimates of unknown model parameters, locates degeneracies in parameter space, and can identify the specific measurements required to break those degeneracies.

}, keywords = {core, early universe, instabilities, matter, nucleosynthesis, oscillations, quantum kinetics, spin, supernovae, transformation}, isbn = {2470-0010}, doi = {10.1103/PhysRevD.96.083008}, url = {The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse in space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a beta plane, standard nudging techniques require observing approximately 70\% of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70\% can be reduced to about 33\% using time delays, and even further if Lagrangian drifter locations are also used as measurements.

}, keywords = {4-dimensional variational assimilation, chaotic systems, dynamics, equations, field, lagrangian data, number, observability, operational implementation, parameter-estimation}, isbn = {1023-5809}, doi = {10.5194/npg-24-9-2017}, url = {Data assimilation variational principles (4D-Var) exhibit a natural symplectic structure among the state variables $x(t)$ and $\dot{x}(t)$. We explore the implications of this structure in both Lagrangian coordinates $\{x(t), \dot{x}(t)\}$ and Hamiltonian canonical coordinates $\{x(t), p(t)\}$ through a numerical examination of the chaotic Lorenz 1996 model in ten dimensions. We find that there are a number of subtleties associated with discretization, boundary conditions, and symplecticity, suggesting differing approaches when working in the Lagrangian versus the Hamiltonian description. We investigate these differences in detail, and accordingly develop a protocol for searching for optimal trajectories in a Hamiltonian space. We find that casting the problem into canonical coordinates can, in some situations, considerably improve the quality of predictions.

}, keywords = {4d-var, chaos, continuous-time, Data assimilation, dynamical systems, error, Hamiltonian systems, integrators, Laplace{\textquoteright}s method, mechanics, operational implementation, symplectic integration, variational principle}, isbn = {0035-9009}, doi = {10.1002/qj.2962}, author = {Kadakia, N. and Rey, D. and Ye, J. and Abarbanel, H. D. I.} } @article {40833, title = {Nonlinear statistical data assimilation for HVC$_{RA}$ neurons in the avian song system}, journal = {Biological Cybernetics}, volume = {110}, number = {6}, year = {2016}, note = {n/a}, month = {2016/12}, pages = {417-434}, type = {Article}, abstract = {With the goal of building a model of the HVC nucleus in the avian song system, we discuss in detail a model of HVC$_{RA}$ projection neurons comprised of a somatic compartment with fast Na$^+$ and K$^+$ currents and a dendritic compartment with slower Ca$^{2+}$ dynamics. We show this model qualitatively exhibits many observed electrophysiological behaviors. We then show in numerical procedures how one can design and analyze feasible laboratory experiments that allow the estimation of all of the many parameters and unmeasured dynamical variables, given observations of the somatic voltage $V_s(t)$ alone. A key to this procedure is to initially estimate the slow dynamics associated with Ca, blocking the fast Na and K variations, and then, with the Ca parameters fixed, estimate the fast Na and K dynamics. This separation of time scales provides a numerically robust method for completing the full neuron model, and the efficacy of the method is tested by prediction when observations are complete. The simulation provides a framework for the slice preparation experiments and illustrates the use of data assimilation methods for the design of those experiments.

}, keywords = {Data assimilation, dynamical estimation, dynamical systems, excitation, inhibition, Ion channel properties, models, neuron models, Neuronal dynamics, parameter estimation, parameter-estimation, sequence generation, song system, spiking, variational-methods, voltage recordings, zebra finch}, isbn = {0340-1200}, doi = {10.1007/s00422-016-0697-3}, url = {We propose a functional architecture of the adult songbird nucleus HVC in which the core element is a "functional syllable unit" (FSU). In this model, HVC is organized into FSUs, each of which provides the basis for the production of one syllable in vocalization. Within each FSU, the inhibitory neuron population takes one of two operational states: 1) simultaneous firing wherein all inhibitory neurons fire simultaneously, and 2) competitive firing of the inhibitory neurons. Switching between these basic modes of activity is accomplished via changes in the synaptic strengths among the inhibitory neurons. The inhibitory neurons connect to excitatory projection neurons such that during state 1 the activity of projection neurons is suppressed, while during state 2 patterns of sequential firing of projection neurons can occur. The latter state is stabilized by feedback from the projection to the inhibitory neurons. Song composition for specific species is distinguished by the manner in which different FSUs are functionally connected to each other. Ours is a computational model built with biophysically based neurons. We illustrate that many observations of HVC activity are explained by the dynamics of the proposed population of FSUs, and we identify aspects of the model that are currently testable experimentally. In addition, and standing apart from the core features of an FSU, we propose that the transition between modes may be governed by the biophysical mechanism of neuromodulation.

}, keywords = {behavior, brain-stem, central pattern generator, competition, computational model, dynamical systems, dynamics, hvc, in-vivo, inhibition, motor, Neural, neurons, sequences, ventral tegmental area, winnerless, zebra finch}, isbn = {0022-3077}, doi = {10.1152/jn.00438.2016}, url = {We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements interior point line parameter search to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a nine ionic channel conductance model to obtain completed models which we then use to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20-50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models in an automatic manner treating all data as equal quantities and requiring minimal additional insight.

}, keywords = {circuits, computational models, convergence, hvc, neurons, parameter-estimation, search algorithm, specializations, taeniopygia-guttata, thalamic relay neurons, zebra finch}, isbn = {2045-2322}, doi = {10.1038/srep32749}, url = {In statistical data assimilation one evaluates the conditional expected values, conditioned on measurements, of interesting quantities on the path of a model through observation and prediction windows. This often requires working with very high dimensional integrals in the discrete time descriptions of the observations and model dynamics, which become functional integrals in the continuous-time limit. Two familiar methods for performing these integrals include (1) Monte Carlo calculations and (2) variational approximations using the method of Laplace plus perturbative corrections to the dominant contributions. We attend here to aspects of the Laplace approximation and develop an annealing method for locating the variational path satisfying the Euler-Lagrange equations that comprises the major contribution to the integrals. This begins with the identification of the minimum action path starting with a situation where the model dynamics is totally unresolved in state space, and the consistent minimum of the variational problem is known. We then proceed to slowly increase the model resolution, seeking to remain in the basin of the minimum action path, until a path that gives the dominant contribution to the integral is identified. After a discussion of some general issues, we give examples of the assimilation process for some simple, instructive models from the geophysical literature. Then we explore a slightly richer model of the same type with two distinct time scales. This is followed by a model characterizing the biophysics of individual neurons.

}, isbn = {1539-3755}, doi = {10.1103/PhysRevE.92.052901}, url = {Most data based state and parameter estimation methods require suitable initial values or guesses to achieve convergence to the desired solution, which typically is a global minimum of some cost function. Unfortunately, however, other stable solutions (e.g., local minima) may exist and provide suboptimal or even wrong estimates. Here, we demonstrate for a 9-dimensional Lorenz-96 model how to characterize the basin size of the global minimum when applying some particular optimization based estimation algorithm. We compare three different strategies for generating suitable initial guesses, and we investigate the dependence of the solution on the given trajectory segment (underlying the measured time series). To address the question of how many state variables have to be measured for optimal performance, different types of multivariate time series are considered consisting of 1, 2, or 3 variables. Based on these time series, the local observability of state variables and parameters of the Lorenz-96 model is investigated and confirmed using delay coordinates. This result is in good agreement with the observation that correct state and parameter estimation results are obtained if the optimization algorithm is initialized with initial guesses close to the true solution. In contrast, initialization with other exact solutions of the model equations (different from the true solution used to generate the time series) typically fails, i.e., the optimization procedure ends up in local minima different from the true solution. Initialization using random values in a box around the attractor exhibits success rates depending on the number of observables and the available time series (trajectory segment). (C) 2015 AIP Publishing LLC.

}, keywords = {algorithm, automatic differentiation, continuous-time, Data assimilation, nonlinear-systems, series, synchronization, system identification}, isbn = {1054-1500}, doi = {10.1063/1.4920942}, url = {Cardiac rhythm management devices provide therapies for both arrhythmias and resynchronisation but not heart failure, which affects millions of patients worldwide. This paper reviews recent advances in biophysics and mathematical engineering that provide a novel technological platform for addressing heart disease and enabling beat-to-beat adaptation of cardiac pacing in response to physiological feedback. The technology consists of silicon hardware central pattern generators (hCPGs) that may be trained to emulate accurately the dynamical response of biological central pattern generators (bCPGs). We discuss the limitations of present CPGs and appraise the advantages of analog over digital circuits for application in bioelectronic medicine. To test the system, we have focused on the cardio-respiratory oscillators in the medulla oblongata that modulate heart rate in phase with respiration to induce respiratory sinus arrhythmia (RSA). We describe here a novel, scalable hCPG comprising physiologically realistic (Hodgkin-Huxley type) neurones and synapses. Our hCPG comprises two neurones that antagonise each other to provide rhythmic motor drive to the vagus nerve to slow the heart. We show how recent advances in modelling allow the motor output to adapt to physiological feedback such as respiration. In rats, we report on the restoration of RSA using an hCPG that receives diaphragmatic electromyography input and use it to stimulate the vagus nerve at specific time points of the respiratory cycle to slow the heart rate. We have validated the adaptation of stimulation to alterations in respiratory rate. We demonstrate that the hCPG is tuneable in terms of the depth and timing of the RSA relative to respiratory phase. 
These pioneering studies will now permit an analysis of the physiological role of RSA as well as its potential therapeutic use in cardiac disease.

}, keywords = {algorithm, compartmental, congestive-heart-failure, dynamic clamp, efficiency, healthy humans, neural models, neurons, pulmonary gas-exchange, rate-variability, respiratory sinus arrhythmia, search}, isbn = {0022-3751}, doi = {10.1113/jphysiol.2014.282723}, url = {Data assimilation transfers information from an observed system to a physically based model system with state variables $x(t)$. The observations are typically noisy, the model has errors, and the initial state $x(t_0)$ is uncertain: the data assimilation is statistical. One can ask about expected values of functions $\langle G(X) \rangle$ on the path $X = \{x(t_0), \ldots, x(t_m)\}$ of the model state through the observation window $\{t_0, \ldots, t_m\}$. The conditional (on the measurements) probability distribution $P(X) = \exp[-A_0(X)]$ determines these expected values. Variational methods using saddle points of the "action" $A_0(X)$, known as 4DVar (Talagrand and Courtier, 1987; Evensen, 2009), are utilized for estimating $\langle G(X) \rangle$. In a path integral formulation of statistical data assimilation, we consider variational approximations in a realization of the action where measurement errors and model errors are Gaussian. We (a) discuss an annealing method for locating the path $X^0$ giving a consistent minimum of the action $A_0(X^0)$, (b) consider the explicit role of the number of measurements at each $t_n$ in determining $A_0(X^0)$, and (c) identify a parameter regime for the scale of model errors which allows $X^0$ to give a precise estimate of $\langle G(X^0) \rangle$ with computable, small higher-order corrections.

}, isbn = {1023-5809}, doi = {10.5194/npg-22-205-2015}, url = {Information in measurements of a nonlinear dynamical system can be transferred to a quantitative model of the observed system to establish its fixed parameters and unobserved state variables. After this learning period is complete, one may predict the model response to new forces and, when successful, these predictions will match additional observations. This adjustment process encounters problems when the model is nonlinear and chaotic because dynamical instability impedes the transfer of information from the data to the model when the number of measurements at each observation time is insufficient. We discuss the use of information in the waveform of the data, realized through a time delayed collection of measurements, to provide additional stability and accuracy to this search procedure. Several examples are explored, including a few familiar nonlinear dynamical systems and small networks of Colpitts oscillators.

}, isbn = {1539-3755}, doi = {10.1103/PhysRevE.90.062916}, url = {Recent results demonstrate techniques for fully quantitative, statistical inference of the dynamics of individual neurons under the Hodgkin-Huxley framework of voltage-gated conductances. Using a variational approximation, this approach has been successfully applied to simulated data from model neurons. Here, we use this method to analyze a population of real neurons recorded in a slice preparation of the zebra finch forebrain nucleus HVC. Our results demonstrate that using only 1,500 ms of voltage recorded while injecting a complex current waveform, we can estimate the values of 12 state variables and 72 parameters in a dynamical model, such that the model accurately predicts the responses of the neuron to novel injected currents. A less complex model produced consistently worse predictions, indicating that the additional currents contribute significantly to the dynamics of these neurons. Preliminary results indicate some differences in the channel complement of the models for different classes of HVC neurons, which accords with expectations from the biology. Whereas the model for each cell is incomplete (representing only the somatic compartment, and likely to be missing classes of channels that the real neurons possess), our approach opens the possibility to investigate in modeling the plausibility of additional classes of channels the cell might possess, thus improving the models over time. These results provide an important foundational basis for building biologically realistic network models, such as the one in HVC that contributes to the process of song production and developmental vocal learning in songbirds.

}, keywords = {brain, Data assimilation, dynamical estimation, gated potassium channels, generation, hvc neurons, Ion channel properties, localization, Neuronal dynamics, sequence, single, song, system, zebra finch}, isbn = {0340-1200}, doi = {10.1007/s00422-014-0615-5}, url = {Estimating the behavior of a network of neurons requires accurate models of the individual neurons along with accurate characterizations of the connections among them. Whereas for a single cell, measurements of the intracellular voltage are technically feasible and sufficient to characterize a useful model of its behavior, making sufficient numbers of simultaneous intracellular measurements to characterize even small networks is infeasible. This paper builds on prior work on single neurons to explore whether knowledge of the time of spiking of neurons in a network, once the nodes (neurons) have been characterized biophysically, can provide enough information to usefully constrain the functional architecture of the network: the existence of synaptic links among neurons and their strength. Using standardized voltage and synaptic gating variable waveforms associated with a spike, we demonstrate that the functional architecture of a small network of model neurons can be established.

}, keywords = {algorithm, Data assimilation, networks, Neuronal dynamics}, isbn = {0340-1200}, doi = {10.1007/s00422-014-0601-y}, url = {We investigate the dynamics of a conductance-based neuron model coupled to a model of intracellular calcium uptake and release by the endoplasmic reticulum. The intracellular calcium dynamics occur on a time scale that is orders of magnitude slower than voltage spiking behavior. Coupling these mechanisms sets the stage for the appearance of chaotic dynamics, which we observe within certain ranges of model parameter values. We then explore the question of whether one can, using observed voltage data alone, estimate the states and parameters of the voltage plus calcium (V+Ca) dynamics model. We find the answer is negative. Indeed, we show that voltage plus another observed quantity must be known to allow the estimation to be accurate. We show that observing both the voltage time course $V(t)$ and the intracellular Ca time course will permit accurate estimation, and from the estimated model state, accurate prediction after observations are completed. This sets the stage for how one will be able to use a more detailed model of V+Ca dynamics in neuron activity in the analysis of experimental data on individual neurons as well as functional networks in which the nodes (neurons) have these biophysical properties.

}, keywords = {brain, ca2+ oscillations, deterministic chaos, excitable cell, in-vivo, inositol 1,4,5-trisphosphate, model, rat hepatocytes, release, zebra finch}, isbn = {1539-3755}, doi = {10.1103/PhysRevE.89.062714}, url = {Transferring information from observations to models of complex systems may meet impediments when the number of observations at any observation time is not sufficient. This is especially so when chaotic behavior is expressed. We show how to use time-delay embedding, familiar from nonlinear dynamics, to provide the information required to obtain accurate state and parameter estimates. Good estimates of parameters and unobserved states are necessary for good predictions of the future state of a model system. This method may be critical in allowing the understanding of prediction in complex systems as varied as nervous systems and weather prediction where insufficient measurements are typical. (C) 2014 Elsevier B.V. All rights reserved.

}, keywords = {Data assimilation, flow, identification, model, number, observability, synchronization, system, time series analysis, time-series}, isbn = {0375-9601}, doi = {10.1016/j.physleta.2014.01.027}, url = {The authors consider statistical ensemble data assimilation for a one-layer shallow-water equation in a twin experiment: data are generated by an $N \times N$ enstrophy-conserving grid integration scheme along with an Ekman vertical velocity at the bottom of an Ekman layer driving the flow and Rayleigh and eddy viscosity dissipation damping the flow. Data are generated for $N = 16$ and the chaotic flow that results is analyzed. This analysis is performed in a path-integral formulation of the data assimilation problem. These path integrals are estimated by a Monte Carlo method using a Metropolis Hastings algorithm. The authors{\textquoteright} concentration is on the number of measurements $L_c$ that must be assimilated by the model to allow accurate estimation of the full state of the model at the end of an observation window. It is found that for this shallow-water flow approximately 70\% of the full set of state variables must be observed to accomplish either goal. The number of required observations is determined by examining the number $L_c$ needed to synchronize the observed data and the model output when $L$ data streams are assimilated by the model. Synchronization occurs when $L \geq L_c$ and the correct selection of which $L_c$ data are observed is made. If the number of observations is too small, so synchronization does not occur, or the selection of observations does not lead to synchronization of the data with the model output, state estimates during and at the end of the observation window and predictions beyond the observation window are inaccurate.

}, keywords = {dynamics, state, systems}, isbn = {0027-0644}, doi = {10.1175/mwr-d-12-00103.1}, url = {Hodgkin-Huxley (HH) models of neuronal membrane dynamics consist of a set of nonlinear differential equations that describe the time-varying conductance of various ion channels. Using observations of voltage alone we show how to estimate the unknown parameters and unobserved state variables of an HH model in the expected circumstance that the measurements are noisy, the model has errors, and the state of the neuron is not known when observations commence. The joint probability distribution of the observed membrane voltage and the unobserved state variables and parameters of these models is a path integral through the model state space. The solution to this integral allows estimation of the parameters and thus a characterization of many biological properties of interest, including channel complement and density, that give rise to a neuron{\textquoteright}s electrophysiological behavior. This paper describes a method for directly evaluating the path integral using a Monte Carlo numerical approach. This provides estimates not only of the expected values of model parameters but also of their posterior uncertainty. Using test data simulated from neuronal models comprising several common channels, we show that short ($<$50 ms) intracellular recordings from neurons stimulated with a complex time-varying current yield accurate and precise estimates of the model parameters as well as accurate predictions of the future behavior of the neuron. We also show that this method is robust to errors in model specification, supporting model development for biological preparations in which the channel expression and other biophysical properties of the neurons are not fully known.

}, keywords = {Chain Monte Carlo, Data assimilation, Ion channel properties, Markov, Neuronal dynamics}, isbn = {0340-1200}, doi = {10.1007/s00422-012-0487-5}, url = {Neuroscientists often propose detailed computational models to probe the properties of the neural systems they study. With the advent of neuromorphic engineering, there is an increasing number of hardware electronic analogs of biological neural systems being proposed as well. However, for both biological and hardware systems, it is often difficult to estimate the parameters of the model so that they are meaningful to the experimental system under study, especially when these models involve a large number of states and parameters that cannot be simultaneously measured. We have developed a procedure to solve this problem in the context of interacting neural populations using a recently developed dynamic state and parameter estimation (DSPE) technique. This technique uses synchronization as a tool for dynamically coupling experimentally measured data to its corresponding model to determine its parameters and internal state variables. Typically experimental data are obtained from the biological neural system and the model is simulated in software; here we show that this technique is also efficient in validating proposed network models for neuromorphic spike-based very large-scale integration (VLSI) chips and that it is able to systematically extract network parameters such as synaptic weights, time constants, and other variables that are not accessible by direct observation. Our results suggest that this method can become a very useful tool for model-based identification and configuration of neuromorphic multichip VLSI systems.

}, keywords = {analog vlsi, cortical network model, fire, inhibition, neural-networks, neurons, persistent activity, spiking neurons, synapses, synchronization, working-memory}, isbn = {0899-7667}, url = {The answers to data assimilation questions can be expressed as path integrals over all possible state and parameter histories. We show how these path integrals can be evaluated numerically using a Markov Chain Monte Carlo method designed to run in parallel on a graphics processing unit (GPU). We demonstrate the application of the method to an example with a transmembrane voltage time series of a simulated neuron as an input, and using a Hodgkin-Huxley neuron model. By taking advantage of GPU computing, we gain a parallel speedup factor of up to about 300, compared to an equivalent serial computation on a CPU, with performance increasing as the length of the observation time used for data assimilation increases. (C) 2011 Elsevier Inc. All rights reserved.

}, keywords = {aqueous-solutions, Data assimilation, GPU computing, Hodgkin-Huxley, parameter-estimation, Path integral Monte Carlo, sampling methods, State and parameter estimation, systems}, isbn = {0021-9991}, doi = {10.1016/j.jcp.2011.07.015}, url = {We present a method for using measurements of membrane voltage in individual neurons to estimate the parameters and states of the voltage-gated ion channels underlying the dynamics of the neuron{\textquoteright}s behavior. Short injections of a complex time-varying current provide sufficient data to determine the reversal potentials, maximal conductances, and kinetic parameters of a diverse range of channels, representing tens of unknown parameters and many gating variables in a model of the neuron{\textquoteright}s behavior. These estimates are used to predict the response of the model at times beyond the observation window. This method of data assimilation extends to the general problem of determining model parameters and unobserved state variables from a sparse set of observations, and may be applicable to networks of neurons. We describe an exact formulation of the tasks in nonlinear data assimilation when one has noisy data, errors in the models, and incomplete information about the state of the system when observations commence. This is a high dimensional integral along the path of the model state through the observation window. In this article, a stationary path approximation to this integral, using a variational method, is described and tested employing data generated using neuronal models comprising several common channels with Hodgkin-Huxley dynamics. These numerical experiments reveal a number of practical considerations in designing stimulus currents and in determining model consistency. The tools explored here are computationally efficient and have paths to parallelization that should allow large individual neuron and network problems to be addressed.

}, keywords = {algorithm, Data assimilation, Ion channel properties, Neuronal dynamics, parameter-estimation, state}, isbn = {0340-1200}, doi = {10.1007/s00422-011-0459-1}, url = {The process of transferring information from observations of a dynamical system to estimate the fixed parameters and unobserved states of a system model can be formulated as the evaluation of a discrete-time path integral in model state space. The observations serve as a guiding {\textquoteleft}potential{\textquoteright} working with the dynamical rules of the model to direct system orbits in state space. The path-integral representation permits direct numerical evaluation of the conditional mean path through the state space as well as conditional moments about this mean. Using a Monte Carlo method for selecting paths through state space, we show how these moments can be evaluated and demonstrate in an interesting model system the explicit influence of the role of transfer of information from the observations. We address the question of how many observations are required to estimate the unobserved state variables, and we examine the assumptions of Gaussianity of the underlying conditional probability. Copyright (C) 2010 Royal Meteorological Society

}, keywords = {4d-var, Data assimilation, ensemble kalman filter, identification, model-error, nonlinear-systems, path integral, simulation}, isbn = {0035-9009}, doi = {10.1002/qj.690}, url = {Throughout the brain, neurons encode information in fundamental units of spikes. Each spike represents the combined thresholding of synaptic inputs and intrinsic neuronal dynamics. Here, we address a basic question of spike train formation: how do perithreshold synaptic inputs perturb the output of a spiking neuron? We recorded from single entorhinal principal cells in vitro and drove them to spike steadily at $\sim$5 Hz (theta range) with direct current injection, then used a dynamic-clamp to superimpose strong excitatory conductance inputs at varying rates. Neurons spiked most reliably when the input rate matched the intrinsic neuronal firing rate. We also found a striking tendency of neurons to preserve their rates and coefficients of variation, independently of input rates. As mechanisms for this rate maintenance, we show that the efficacy of the conductance inputs varied with the relationship of input rate to neuronal firing rate, and with the arrival time of the input within the natural period. Using a novel method of spike classification, we developed a minimal Markov model that reproduced the measured statistics of the output spike trains and thus allowed us to identify and compare contributions to the rate maintenance and resonance. We suggest that the strength of rate maintenance may be used as a new categorization scheme for neuronal response and note that individual intrinsic spiking mechanisms may play a significant role in forming the rhythmic spike trains of activated neurons; in the entorhinal cortex, individual pacemakers may dominate production of the regional theta rhythm.

}, keywords = {cat neocortical neurons, channels, coding, cortical-neurons, differential electroresponsiveness, dynamics, entorhinal cortex, ii stellate cells, integration, ionic mechanisms, layer-ii, rat, reliability, resonance, spiking, theta}, isbn = {0953-816X}, doi = {10.1111/j.1460-9568.2010.07455.x}, url = {In variational formulations of data assimilation, the estimation of parameters or initial state values by a search for a minimum of a cost function can be hindered by the numerous local minima in the dependence of the cost function on those quantities. We argue that this is a result of instability on the synchronization manifold where the observations are required to match the model outputs in the situation where the data and the model are chaotic. The solution to this impediment to estimation is given as controls moving the positive conditional Lyapunov exponents on the synchronization manifold to negative values and adding to the cost function a penalty that drives those controls to zero as a result of the optimization process implementing the assimilation. This is seen as the solution to the proper size of {\textquoteleft}nudging{\textquoteright} terms: they are zero once the estimation has been completed, leaving only the physics of the problem to govern forecasts after the assimilation window. We show how this procedure, called Dynamical State and Parameter Estimation (DSPE), works in the case of the Lorenz96 model with nine dynamical variables. Using DSPE, we are able to accurately estimate the fixed parameter of this model and all of the state variables, observed and unobserved, over an assimilation time interval [0, T]. Using the state variables at T and the estimated fixed parameter, we are able to accurately forecast the state of the model for t \> T to those times where the chaotic behaviour of the system interferes with forecast accuracy. Copyright (C) 2010 Royal Meteorological Society

}, keywords = {adaptive-control, chaotic systems, conditional Lyapunov exponent, DSPE, model, observer, parameter estimation, parameter-estimation, predictive control, scale constrained optimization, sqp algorithm, synchronization, time-series, unstable synchronization manifold}, isbn = {0035-9009}, doi = {10.1002/qj.600}, url = {Gibb L, Gentner TQ, Abarbanel HDI. Brain stem feedback in a computational model of birdsong sequencing. J Neurophysiol 102: 1763-1778, 2009. First published June 24, 2009; doi:10.1152/jn.91154.2008. Uncovering the roles of neural feedback in the brain is an active area of experimental research. In songbirds, the telencephalic premotor nucleus HVC receives neural feedback from both forebrain and brain stem areas. Here we present a computational model of birdsong sequencing that incorporates HVC and associated nuclei and builds on the model of sparse bursting presented in our preceding companion paper. Our model embodies the hypotheses that 1) different networks in HVC control different syllables or notes of birdsong, 2) interneurons in HVC not only participate in sparse bursting but also provide mutual inhibition between networks controlling syllables or notes, and 3) these syllable networks are sequentially excited by neural feedback via the brain stem and the afferent thalamic nucleus Uva, or a similar feedback pathway. We discuss the model{\textquoteright}s ability to unify physiological, behavioral, and lesion results and we use it to make novel predictions that can be tested experimentally. The model suggests a neural basis for sequence variations, shows that stimulation in the feedback pathway may have different effects depending on the balance of excitation and inhibition at the input to HVC from Uva, and predicts deviations from uniform expansion of syllables and gaps during HVC cooling.

}, keywords = {auditory responses, columba-livia, control-system, forebrain, lesions, model, neural-network, neurons, nucleus robustus-archistriatalis, taeniopygia-guttata, vocal control, zebra finch song}, isbn = {0022-3077}, doi = {10.1152/jn.91154.2008}, url = {Gibb L, Gentner TQ, Abarbanel HDI. Inhibition and recurrent excitation in a computational model of sparse bursting in song nucleus HVC. J Neurophysiol 102: 1748-1762, 2009. First published June 10, 2009; doi:10.1152/jn.00670.2007. The telencephalic premotor nucleus HVC is situated at a critical point in the pattern-generating premotor circuitry of oscine songbirds. A striking feature of HVC{\textquoteright}s premotor activity is that its projection neurons burst extremely sparsely. Here we present a computational model of HVC embodying several central hypotheses: 1) sparse bursting is generated in bistable groups of recurrently connected robust nucleus of the arcopallium (RA)-projecting (HVC(RA)) neurons; 2) inhibitory interneurons terminate bursts in the HVC(RA) groups; and 3) sparse sequences of bursts are generated by the propagation of waves of bursting activity along networks of HVC(RA) neurons. Our model of sparse bursting places HVC in the context of central pattern generators and cortical networks using inhibition, recurrent excitation, and bistability. Importantly, the unintuitive result that inhibitory interneurons can precisely terminate the bursts of HVC(RA) groups while showing relatively sustained activity throughout the song is made possible by a specific constraint on their connectivity. We use the model to make novel predictions that can be tested experimentally.

}, keywords = {area hvc, forebrain, gaba-a, neural-network model, neurons, sequence generation, synaptic currents, timing-dependent plasticity, vocal center, zebra finch song}, isbn = {0022-3077}, doi = {10.1152/jn.00670.2007}, url = {Data assimilation is a problem in estimating the fixed parameters and state of a model of an observed dynamical system as it receives inputs from measurements passing information to the model. Using methods developed in statistical physics, we present effective actions and equations of motion for the mean orbits associated with the temporal development of a dynamical model when it has errors, there is uncertainty in its initial state, and it receives information from noisy measurements. If there are statistical dependences among errors in the measurements they can be included in this approach. (C) 2009 Elsevier B.V. All rights reserved.

}, keywords = {dynamics, model, nonlinear-systems, parameter-estimation, state}, isbn = {0375-9601}, doi = {10.1016/j.physleta.2009.08.072}, url = {Measures of multiple spike train synchrony are essential in order to study issues such as spike timing reliability, network synchronization, and neuronal coding. These measures can broadly be divided into multivariate measures and averages over bivariate measures. One of the most recent bivariate approaches, the ISI-distance, employs the ratio of instantaneous interspike intervals (ISIs). In this study we propose two extensions of the ISI-distance, the straightforward averaged bivariate ISI-distance and the multivariate ISI-diversity based on the coefficient of variation. Like the original measure, these extensions combine many properties desirable in applications to real data. In particular, they are parameter-free, time scale independent, and easy to visualize in a time-resolved manner, as we illustrate with in vitro recordings from a cortical neuron. Using a simulated network of Hindmarsh-Rose neurons as a controlled configuration, we compare the performance of our methods in distinguishing different levels of multi-neuron spike train synchrony to the performance of six other previously published measures. We show and explain why the averaged bivariate measures perform better than the multivariate ones and why the multivariate ISI-diversity is the best performer among the multivariate methods. Finally, in a comparison against standard methods that rely on moving window estimates, we use single-unit monkey data to demonstrate the advantages of the instantaneous nature of our methods. (C) 2009 Elsevier B.V. All rights reserved.

}, keywords = {cells, Clustering, coding, cortex, ISI-distance, metric-space analysis, model, Neuronal, neuronal networks, reliability, Spike trains, synchronization, systems, time series analysis, variability}, isbn = {0165-0270}, doi = {10.1016/j.jneumeth.2009.06.039}, url = {We examine the use of synchronization as a mechanism for extracting parameter and state information from experimental systems. We focus on important aspects of this problem that have received little attention previously and we explore them using experiments and simulations with the chaotic Colpitts oscillator as an example system. We explore the impact of model imperfection on the ability to extract valid information from an experimental system. We compare two optimization methods: an initial value method and a constrained method. Each of these involves coupling the model equations to the experimental data in order to regularize the chaotic motions on the synchronization manifold. We explore both time-dependent and time-independent coupling and discuss the use of periodic impulse coupling. We also examine both optimized and fixed (or manually adjusted) coupling. For the case of an optimized time-dependent coupling function u(t) we find a robust structure which includes sharp peaks and intervals where it is zero. This structure shows a strong correlation with the location in phase space and appears to depend on noise, imperfections of the model, and the Lyapunov direction vectors. For time-independent coupling we find the counterintuitive result that often the optimal rms error in fitting the model to the data initially increases with coupling strength. Comparison of this result with that obtained using simulated data may provide one measure of model imperfection. 
The constrained method with time-dependent coupling appears to have benefits in synchronizing long data sets with minimal impact, while the initial value method with time-independent coupling tends to be substantially faster, more flexible, and easier to use. We also describe a method of coupling which is useful for sparse experimental data sets. Our use of the Colpitts oscillator allows us to explore in detail the case of a system with one positive Lyapunov exponent. The methods we explored are easily extended to driven systems such as neurons with time-dependent injected current. They are expected to be of value in nonchaotic systems as well. Software is available on request.

}, keywords = {adaptive synchronization, algorithm, colpitts oscillators, dynamical-systems, identification, observers, series, time nonlinear-systems, tracking, uncertain}, isbn = {1539-3755}, doi = {10.1103/PhysRevE.80.016201}, url = {A snapshot from the steady-state response of chemical sensors conveys, on average, more mature and relevant information regarding the analyte than a snapshot from the transient can provide. Nevertheless, time constraints in many applications make it infeasible to wait for and extract steady-state features. Substituting them by transient ones is the only viable solution to accelerate odor processing. Based on measurements recorded from metal-oxide sensors, we point to a correlation between a transient feature and the steady-state resistance that are observed in response to fixed analyte concentration. We utilize this correlation to expedite standard quantification and classification substantially while ensuring the performance that the steady-state feature can provide. (C) 2008 Elsevier B.V. All rights reserved.

}, keywords = {chemical sensors, classification, feature-extraction, hotplate gas sensors, identification, mixtures, models, Signal processing, steady-state, Transients}, isbn = {0925-4005}, doi = {10.1016/j.snb.2008.10.065}, url = {The speed and accuracy of odor recognition in insects can hardly be resolved by the raw descriptors provided by olfactory receptors alone due to their slow time constant and high variability. The animal overcomes these barriers by means of the antennal lobe (AL) dynamics, which consolidates the classificatory information in receptor signal with a spatiotemporal code that is enriched in odor sensitivity, particularly in its transient. Inspired by this fact, we propose an easily implementable AL-like network and show that it significantly expedites and enhances the identification of odors from slow and noisy artificial polymer sensor responses. The device owes its efficiency to two intrinsic mechanisms: inhibition (which triggers a competition) and integration (due to the dynamical nature of the network). The former functions as a sharpening filter extracting the features of receptor signal that favor odor separation, whereas the latter implements a working memory by accumulating the extracted features in trajectories. This cooperation boosts the odor specificity during the receptor transient, which is essential for fast odor recognition.

}, keywords = {chemical sensors, drosophila, electronic nose, model, mushroom body, neural assemblies, odor representations, olfactory network dynamics, short-term-memory, system}, isbn = {0899-7667}, doi = {10.1162/neco.2008.05-08-780}, url = {We discuss the problem of determining unknown fixed parameters and unobserved state variables in nonlinear models of a dynamical system using observed time series data from that system. In dynamical terms this requires synchronization of the experimental data with time series output from a model. If the model and the experimental system are chaotic, the synchronization manifold, where the data time series is equal to the model time series, may be unstable. If this occurs, then small perturbations in parameters or state variables can lead to large excursions near the synchronization manifold and produce a very complex surface in any estimation metric for those quantities. Coupling the experimental information to the model dynamics can lead to a stabilization of this manifold by reducing a positive conditional Lyapunov exponent (CLE) to a negative value. An approach called dynamical parameter estimation (DPE) addresses these instabilities and regularizes them, allowing for smooth surfaces in the space of parameters and initial conditions. DPE acts as an observer in the control systems sense, and because the control is systematically removed through an optimization process, it acts as an estimator of the unknown model parameters for the desired physical model without external control. Examples are given from several systems including an electronic oscillator, a neuron model, and a very simple geophysical model. In networks and larger dynamical models one may encounter many positive CLEs, and we investigate a general approach for estimating fixed model parameters and unobserved system states in this situation.

}, keywords = {chaotic systems, conditional Lyapunov exponents, Data assimilation, estimation in nonlinear systems, identification, models, nonlinear prediction, observability, observer, scale constrained optimization, series, single-neuron, sqp algorithm, synchronization, synchronization manifold, time nonlinear-systems}, isbn = {1536-0040}, doi = {10.1137/090749761}, url = {Sensory systems pass information about an animal{\textquoteright}s environment to higher nervous system units through sequences of action potentials. When these action potentials have essentially equivalent wave forms, all information is contained in the interspike intervals (ISIs) of the spike sequence. How do neural circuits recognize and read these ISI sequences? We address this issue of temporal sequence learning by a neuronal system utilizing spike timing dependent plasticity (STDP). We present a general architecture of neural circuitry that can perform the task of ISI recognition. The essential ingredients of this neural circuit, which we refer to as "interspike interval recognition unit" (IRU), are (i) a spike selection unit, the function of which is to selectively distribute input spikes to downstream IRU circuitry; (ii) a time-delay unit that can be tuned by STDP; and (iii) a detection unit, which is the output of the IRU and a spike from which indicates successful ISI recognition by the IRU. We present two distinct configurations for the time-delay circuit within the IRU using excitatory and inhibitory synapses, respectively, to produce a delayed output spike at time t(0)+tau(R) in response to the input spike received at time t(0). tau(R) is the tunable parameter of the time-delay circuit that controls the timing of the delayed output spike. We discuss the forms of STDP rules for excitatory and inhibitory synapses, respectively, that allow for modulation of tau(R) for the IRU to perform its task of ISI recognition. 
We then present two specific implementations for the IRU circuitry, derived from the general architecture that can both learn the ISIs of a training sequence and then recognize the same ISI sequence when it is presented on subsequent occasions.

}, keywords = {barn owl, correlated activity, hippocampal-neurons, information, model, song system, synchrony, term synaptic plasticity, timing-dependent plasticity, white-crowned sparrow}, isbn = {1539-3755}, doi = {10.1103/PhysRevE.78.031918}, url = {Synchronization between experimental observations and a dynamical model with undetermined parameters can assist in completing the specification of the model parameters. The quality of the synchronization, a cost function to be minimized, typically depends on the difference between the data time series and the model time series. If the coupling between the data and the model is too strong, this cost function is small for any data and any model, and the variation of the cost function with respect to the parameters of interest is too small to permit selection of a value of the parameters. If the coupling is too small, synchronization is lost. We introduce two methods for balancing the competing desires of a small cost function and the numerical ability to determine parameters accurately. One method of {\textquoteleft}balanced{\textquoteright} synchronization adds a requirement that the conditional Lyapunov exponent of the model system, conditioned on being driven by the data, remain negative but small. The other method allows the coupling to vary in time according to the error in synchronization. This second method succeeds because the data and the model exhibit generalized synchronization in the region where the parameters of the model are well determined. (C) 2007 Elsevier B.V. All rights reserved.

}, keywords = {chaotic systems, generalized synchronization, identification, parameter estimation, synchronization, time-series}, isbn = {0375-9601}, doi = {10.1016/j.physleta.2007.10.097}, url = {Using synchronization between observations and a model with undetermined parameters is a natural way to complete the specification of the model. The quality of the synchronization, a cost function to be minimized, typically is evaluated by a least squares difference between the data time series and the model time series. If the coupling between the data and the model is too strong, this cost function is small for any data and any model and the variation of the cost function with respect to the parameters of interest is too small to permit selection of an optimal value of the parameters. We introduce two methods for balancing the competing desires of a small cost function for the quality of the synchronization and the numerical ability to determine parameters accurately. One method of "balanced" synchronization adds to the synchronization cost function a requirement that the conditional Lyapunov exponent of the model system, conditioned on being driven by the data, remain negative but small in magnitude. The other method allows the coupling between the data and the model to vary in time according to the error in synchronization. This method succeeds because the data and the model exhibit generalized synchronization in the region where the parameters of the model are well determined. Examples are explored which have deterministic chaos with and without noise in the data signal.

}, keywords = {adaptive, coupled chaotic systems, generalized synchronization, identification, synchronization, time-series, uncertain}, isbn = {1539-3755}, doi = {10.1103/PhysRevE.77.016208}, url = {In verifying and validating models of nonlinear processes it is important to incorporate information from observations in an efficient manner. Using the idea of synchronization of nonlinear dynamical systems, we present a framework for connecting a data signal with a model in a way that minimizes the required coupling yet allows the estimation of unknown parameters in the model. The need to evaluate unknown parameters in models of nonlinear physical, biophysical, and engineering systems occurs throughout the development of phenomenological or reduced models of dynamics. Our approach builds on existing work that uses synchronization as a tool for parameter estimation. We address some of the critical issues in that work and provide a practical framework for finding an accurate solution. In particular, we show the equivalence of this problem to that of tracking within an optimal control framework. This equivalence allows the application of powerful numerical methods that provide robust practical tools for model development and validation. (C) 2007 Elsevier B.V. All rights reserved.

}, keywords = {chaotic systems, identification, optimization, parameter estimation, synchronization, time-series}, isbn = {0375-9601}, doi = {10.1016/j.physleta.2007.12.051}, url = {Estimating the degree of synchrony or reliability between two or more spike trains is a frequent task in both experimental and computational neuroscience. In recent years, many different methods have been proposed that typically compare the timing of spikes on a certain time scale to be optimized by the analyst. Here, we propose the ISI-distance, a simple complementary approach that extracts information from the interspike intervals by evaluating the ratio of the instantaneous firing rates. The method is parameter free, time scale independent and easy to visualize as illustrated by an application to real neuronal spike trains obtained in vitro from rat slices. In a comparison with existing approaches on spike trains extracted from a simulated Hindmarsh-Rose network, the ISI-distance performs as well as the best time-scale-optimized measure based on spike timing. (C) 2007 Elsevier B.V. All rights reserved.

}, keywords = {cells, Clustering, cortex, distance, event synchronization, frequency, networks, neuronal coding, patterns, phase, precision, reliability, Spike trains, systems, time series analysis}, isbn = {0165-0270}, doi = {10.1016/j.jneumeth.2007.05.031}, url = {Information theory provides a natural set of statistics to quantify the amount of knowledge a neuron conveys about a stimulus. A related work (Kennel, Shlens, Abarbanel, \& Chichilnisky, 2005) demonstrated how to reliably estimate, with a Bayesian confidence interval, the entropy rate from a discrete, observed time series. We extend this method to measure the rate of novel information that a neural spike train encodes about a stimulus: the average and specific mutual information rates. Our estimator makes few assumptions about the underlying neural dynamics, shows excellent performance in experimentally relevant regimes, and uniquely provides confidence intervals bounding the range of information rates compatible with the observed spike train. We validate this estimator with simulations of spike trains and highlight how stimulus parameters affect its convergence in bias and variance. Finally, we apply these ideas to a recording from a guinea pig retinal ganglion cell and compare results to a simple linear decoder.

}, keywords = {code, cortical-neurons, entropy estimation, lateral geniculate-nucleus, mutual information, precision, primate, responses, retina, retinal ganglion-cells, visual information}, isbn = {0899-7667}, doi = {10.1162/neco.2007.19.7.1683}, url = {Background: Optical indicators of cytosolic calcium levels have become important experimental tools in systems and cellular neuroscience. Indicators are known to interfere with intracellular calcium levels by acting as additional buffers, and this may strongly alter the time-course of various dynamical variables to be measured. Results: By investigating the underlying reaction kinetics, we show that in some ranges of kinetic parameters one can explicitly link the time dependent indicator signal to the time-course of the calcium influx, and thus, to the unperturbed calcium level had there been no indicator in the cell.

}, isbn = {1742-4682}, doi = {10.1186/1742-4682-4-7}, url = {Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience?

}, keywords = {central pattern, generator, neural-networks, neuronal networks, rhythmic motor patterns, short-term-memory, synaptic plasticity, timing-dependent plasticity, turtle visual-cortex, winner-take-all, working-memory}, isbn = {0034-6861}, doi = {10.1103/RevModPhys.78.1213}, url = {Actions of inhibitory interneurons organize and modulate many neuronal processes, yet the mechanisms and consequences of plasticity of inhibitory synapses remain poorly understood. We report on spike-timing-dependent plasticity of inhibitory synapses in the entorhinal cortex. After pairing presynaptic stimulations at time t(pre) with evoked postsynaptic spikes at time t(post) under pharmacological blockade of excitation we found, via whole cell recordings, an asymmetrical timing rule for plasticity of the remaining inhibitory responses. Strength of response varied as a function of the time interval Delta t = t(post) - t(pre): for Delta t \> 0 inhibitory responses potentiated, peaking at a delay of 10 ms. For Delta t \< 0, the synaptic coupling depressed, again with a maximal effect near 10 ms of delay. We also show that changes in synaptic strength depend on changes in intracellular calcium concentrations and demonstrate that the calcium enters the postsynaptic cell through voltage-gated channels. Using network models, we demonstrate how this novel form of plasticity can sculpt network behavior efficiently and with remarkable flexibility.

}, keywords = {barrel cortex, cotransporter kcc2, differential electroresponsiveness, gabaergic synapses, hippocampus, layer-ii, long-term potentiation, neurons, pyramidal cells, rat, synaptic plasticity}, isbn = {0022-3077}, doi = {10.1152/jn.00551.2006}, url = {By using multi-electrode arrays or optical imaging, investigators can now record from many individual neurons in various parts of nervous systems simultaneously while an animal performs sensory, motor or cognitive tasks. Given the large multidimensional datasets that are now routinely generated, it is often not obvious how to find meaningful results within the data. The analysis of neuronal-population recordings typically involves two steps: the extraction of relevant dynamics from neural data, and then use of the dynamics to classify and discriminate features of a stimulus or behavior. We focus on the application of techniques that emphasize interactions among the recorded neurons rather than using just the correlations between individual neurons and a perception or a behavior. An understanding of modern analysis techniques is crucially important for researchers interested in the co-varying activity among populations of neurons or even brain regions.

}, keywords = {brain, classification, cortex, decision-making, discriminant-analysis, ensembles, olfactory-bulb, principal component, representations, signals}, isbn = {0959-4388}, doi = {10.1016/j.conb.2006.03.014}, url = {Sensory systems present environmental information to the central nervous system as sequences of action potentials or spikes. How do animals recognize these sequences carrying information about their world? We present a biologically inspired neural circuit designed to enable spike pattern recognition. This circuit is capable of training itself on a given interspike interval (ISI) sequence and is then able to respond to presentations of the same sequence. The essential ingredients of the recognition circuit are (a) a tunable time delay circuit, (b) a spike selection unit, and (c) a tuning mechanism using spike timing dependent plasticity of inhibitory synapses. We have investigated this circuit using Hodgkin-Huxley neuron models connected by realistic excitatory and inhibitory synapses. It is robust in the presence of noise represented as jitter in the spike times of the ISI sequence.

}, keywords = {information, neurons, visual-cortex}, isbn = {0031-9007}, doi = {10.1103/PhysRevLett.96.148104}, url = {Experimental observations on synaptic plasticity at individual glutamatergic synapses from the CA3 Schaffer collateral pathway onto CA1 pyramidal cells in the hippocampus suggest that the transitions in synaptic strength occur among discrete levels at individual synapses [C. C. H. Petersen et al., Proc. Natl. Acad. Sci. USA 85, 4732 (1998); D. H. O{\textquoteright}Connor, Wittenberg, and Wang, Proc. Natl. Acad. Sci. USA (to be published); J. M. Montgomery and D. V. Madison, Trends Neurosci. 27, 744 (2004)]. This happens for both long-term potentiation (LTP) and long-term depression (LTD) induction protocols. O{\textquoteright}Connor, Wittenberg, and Wang have argued that three states would account for their observations on individual synapses in the CA3-CA1 pathway. We develop a quantitative model of this three-state system with transitions among the states determined by a competition between kinases and phosphatases, shown by D. H. O{\textquoteright}Connor et al. to be determinant of LTP and LTD, respectively. Specific predictions for various plasticity protocols are given by coupling this description of discrete synaptic alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptor ligand-gated ion channel conductance changes to a model of postsynaptic membrane potential and associated intracellular calcium fluxes to yield the transition rates among the states. We then present various LTP and LTD induction protocols to the model system and report the resulting whole-cell changes in AMPA conductance. We also examine the effect of our discrete-state synaptic plasticity model on the synchronization of realistic oscillating neurons. We show that one-to-one synchronization is enhanced by the plasticity we discuss here, and the presynaptic and postsynaptic oscillations are in phase.
Synaptic strength saturates naturally in this model and does not require artificial upper or lower cutoffs, in contrast to earlier models of plasticity. See correction: 10.1103/PhysRevE.72.069903

}, keywords = {biophysical model, dependence, enhancement, long-term depression, ltp, neurons, potentiation, receptors, specificity, synchronization}, isbn = {1539-3755}, doi = {10.1103/PhysRevE.72.031914}, url = {Low-dimensional oscillators are a valuable model for the neuronal activity of isolated neurons. When coupled, the self-sustained oscillations of individual free oscillators are replaced by a collective network dynamics. Here, dynamical features of such a network, consisting of three electronic implementations of the Hindmarsh-Rose mathematical model of bursting neurons, are compared to those of a biological neural motor system, specifically the pyloric CPG of the crustacean stomatogastric nervous system. We demonstrate that the network of electronic neurons exhibits realistic synchronized bursting behavior comparable to the biological system. Dynamical properties were analyzed by injecting sinusoidal currents into one of the oscillators. The temporal bursting structure of the electronic neurons in response to periodic stimulation is shown to bear a remarkable resemblance to that observed in the corresponding biological network. These findings provide strong evidence that coupled nonlinear oscillators realistically reproduce the network dynamics experimentally observed in assemblies of several neurons.

}, keywords = {bursting neurons, electronic neurons, entrainment, giant-axons, group, input, locking, mechanisms, model, neural modeling, nonlinear oscillators, pacemaker, patterns, periodic forcing, responses, synchronous behavior}, isbn = {0018-9294}, doi = {10.1109/tbme.2005.844272}, url = {The entropy rate quantifies the amount of uncertainty or disorder produced by any dynamical system. In a spiking neuron, this uncertainty translates into the amount of information potentially encoded and is thus the subject of intense theoretical and experimental investigation. Estimating this quantity in observed, experimental data is difficult and requires a judicious selection of probabilistic models, balancing between two opposing biases. We use a model weighting principle originally developed for lossless data compression, following the minimum description length principle. This weighting yields a direct estimator of the entropy rate, which, compared to existing methods, exhibits significantly less bias and converges faster in simulation. With Monte Carlo techniques, we estimate a Bayesian confidence interval for the entropy rate. In related work, we apply these ideas to estimate the information rates between sensory stimuli and neural responses in experimental data (Shlens, Kennel, Abarbanel, \& Chichilnisky, in preparation).

}, keywords = {complexity, compression, mutual information, natural images, neural spike trains, probability-distributions, sequences, statistics, tree weighting method, visual information}, isbn = {0899-7667}, doi = {10.1162/0899766053723050}, url = {The pyloric network of the lobster stomatogastric ganglion is a prime example of an oscillatory neural circuit. In our previous study on the firing patterns of pyloric neurons we observed characteristic temporal structures termed {\textquoteright}interspike interval (ISI) signatures{\textquoteright} which were found to depend on the synaptic connectivity of the network. Dopamine, a well-known modulator of the pyloric network, is known to affect inhibitory synapses, so it might also tune the fine temporal structure of intraburst spikes, a phenomenon not previously investigated. In the present work we study the DA modulation of ISI patterns of two identified pyloric neurons in normal conditions and after blocking their glutamatergic synaptic connections. Dopamine (10-50 mu M) strongly regularizes the firing of the lateral pyloric (LP) and pyloric dilator (PD) neurons by increasing the reliability of recurrent spike patterns. The most dramatic effect is observed in the LP, where precisely replicated spike multiplets appear in a normally {\textquoteright}noisy{\textquoteright} neuron. The DA-induced regularization of intraburst spike patterns requires functional glutamatergic inputs to the LP neuron and this effect cannot be mimicked by simple intracellular depolarization. Inhibitory synaptic inputs arriving before the bursts are important factors in shaping the intraburst spike dynamics of both the PD and the LP neurons. Our data reveal a novel aspect of chemical neuromodulation in oscillatory neural networks. This effect sets in at concentrations lower than those affecting the overall burst pattern of the network.
The sensitivity of intraburst spike dynamics to preceding synaptic inputs also suggests a novel method of temporal coding in neural bursters.

}, keywords = {amine modulation, currents, firing patterns, frequency, ganglion, information, interspike interval, lobster stomatogastric ganglion, motor-neuron, oscillation, panulirus-interruptus, pattern, pyloric network, signature, stomatogastric, transmission}, isbn = {0953-816X}, doi = {10.1111/j.1460-9568.2005.03894.x}, url = {We investigated decision-making in the leech nervous system by stimulating identical sensory inputs that sometimes elicit crawling and other times swimming. Neuronal populations were monitored with voltage-sensitive dyes after each stimulus. By quantifying the discrimination time of each neuron, we found single neurons that discriminate before the two behaviors are evident. We used principal component analysis and linear discriminant analysis to find populations of neurons that discriminated earlier than any single neuron. The analysis highlighted the neuron cell 208. Hyperpolarizing cell 208 during a stimulus biases the leech to swim; depolarizing it biases the leech to crawl or to delay swimming.

}, keywords = {behavioral choice, cortex, initiation, mechanisms, medicinal leech, neural basis, parietal, perceptual decision, pleurobranchaea, swim, targets}, isbn = {0036-8075}, doi = {10.1126/science.1103736}, url = {We show in a model of spiking neurons that synaptic plasticity in the mushroom bodies in combination with the general fan-in, fan-out properties of the early processing layers of the olfactory system might be sufficient to account for its efficient recognition of odors. For a large variety of initial conditions the model system consistently finds a working solution without any fine-tuning, and is, therefore, inherently robust. We demonstrate that gain control through the known feedforward inhibition of lateral horn interneurons increases the capacity of the system but is not essential for its general function. We also predict an upper limit for the number of odor classes Drosophila can discriminate based on the number and connectivity of its olfactory neurons.

}, keywords = {associative memory, coding, discrimination, drosophila mushroom body, fan-in, fan-out, honeybee, information, Kenyon cells, model, neural assemblies, olfaction, oscillatory dynamics, pattern recognition, projection neurons, synaptic convergence, synaptic plasticity}, isbn = {0340-1200}, doi = {10.1007/s00422-005-0019-7}, url = {The neural circuits of birdsong appear to utilize specific time delays in their operation. In particular, the anterior forebrain pathway (AFP) is implicated in an approximately 40- to 50-ms time delay, DeltaT, playing a role in the relative timing of premotor signals from the nucleus HVc to the robust nucleus of the archistriatum (RA) and control/learning signals from the lateral magnocellular nucleus of the anterior neostriatum (lMAN) to RA. Using a biophysical model of synaptic plasticity based on experiments on mammalian hippocampal and neocortical pyramidal neurons, we propose an understanding of this approximately 40- to 50-ms delay. The biophysical model describes the influence of Ca2+ influx into the postsynaptic RA cells through NMDA and AMPA receptors and the induction of LTP and LTD through complex metabolic pathways. The delay, DeltaT, between HVc --\> RA premotor signals and lMAN --\> RA control/learning signals plays an essential role in determining if synaptic plasticity is induced by signaling from each pathway into RA. If DeltaT is substantially larger than 40 ms, no plasticity is induced. If DeltaT is much less than 40 ms, only potentiation is expected. If DeltaT is approximately 40 ms, the sign of synaptic plasticity is sensitive to DeltaT. Our results suggest that changes in DeltaT may influence learning and maintenance of birdsong. We investigate the robustness of this result to noise and to the removal of the Ca2+ contribution from lMAN --\> RA NMDA receptors.

}, keywords = {avian basal ganglia, biophysical model, circuit, correlated activity, dendritic, forebrain, neurons, nucleus, songbird, spines, zebra finch}, isbn = {0340-1200}, doi = {10.1007/s00422-004-0495-1}, url = {The neuroethology of song learning, production, and maintenance in songbirds presents interesting similarities to human speech. We have developed a biophysical model of the manner in which song could be maintained in adult songbirds. This model may inform us about the human counterpart to these processes. In songbirds, signals generated in nucleus High Vocal center (HVc) follow a direct route along a premotor pathway to the robust nucleus of the archistriatum (RA) as well as an indirect route to RA through the anterior forebrain pathway (AFP): the neurons of RA are innervated from both sources. HVc expresses very sparse bursts of spikes having interspike intervals of about 2 ms. The expressions of these bursts arrive at the RA with a time difference DeltaT approximate to 50 +/- 10 ms between the two pathways. The observed combination of AMPA and NMDA receptors at RA projection neurons suggests that long-term potentiation and long-term depression can both be induced by spike timing plasticity through the pairing of the HVc and AFP signals. We present a dynamical model that stabilizes this synaptic plasticity through a feedback from the RA to the AFP using known connections. The stabilization occurs dynamically and is absent when the RA --\> AFP connection is removed. This requires a dynamical selection of DeltaT. The model does this, and DeltaT lies within the observed range. Our model represents an illustration of a functional consequence of activity-dependent plasticity directly connected with neuroethological observations. Within the model the parameters of the AFP, and thus the magnitude of DeltaT, can also be tuned to an unstable regime. This means that destabilization might be induced by neuromodulation of the AFP.

}, keywords = {avian basal ganglia, correlated activity, electrophysiological properties, finch, finch song system, learned vocalizations, long-term potentiation, neurons, relay, synaptic-transmission, thalamic nucleus, zebra}, isbn = {1539-3755}, doi = {10.1103/PhysRevE.70.051911}, url = {The motor pathway responsible for the complex vocalizations of songbirds has been extensively characterized, both in terms of intrinsic and synaptic physiology in vitro and in terms of the spatiotemporal patterns of neural activity in vivo. However, the relationship between the neural architecture of the song motor pathway and the acoustic features of birdsong is not well understood. Using a computational model of the song motor pathway and the songbird vocal organ, we investigate the relationship between song production and the neural connectivity of nucleus HVc (used as a proper name) and the robust nucleus of the archistriatum (RA). Drawing on recent experimental observations, our neural model contains a population of sequentially bursting HVc neurons driving the activity of a population of RA neurons. An important focus of our investigations is the contribution of intrinsic circuitry within RA to the acoustic output of the model. We find that the inclusion of inhibitory interneurons in the model can substantially influence the features of song syllables, and we illustrate the potential for subharmonic behavior in RA in response to forcing by HVc neurons. Our results demonstrate the association of specific acoustic features with specific neural connectivities and support the view that intrinsic circuitry within RA may play a critical role in generating the features of birdsong.

}, keywords = {hvc, inhibition, mechanisms, neurons, nucleus, plasticity, projections, song, vocal control, zebra finch}, isbn = {0022-3077}, doi = {10.1152/jn.01146.2003}, url = {We propose a theoretical framework for odor classification in the olfactory system of insects. The classification task is accomplished in two steps. The first is a transformation from the antennal lobe to the intrinsic Kenyon cells in the mushroom body. This transformation into a higher-dimensional space is an injective function and can be implemented without any type of learning at the synaptic connections. In the second step, the encoded odors in the intrinsic Kenyon cells are linearly classified in the mushroom body lobes. The neurons that perform this linear classification are equivalent to hyperplanes whose connections are tuned by local Hebbian learning and by competition due to mutual inhibition. We calculate the range of values of activity and size of the network required to achieve efficient classification within this scheme in insect olfaction. We are able to demonstrate that biologically plausible control mechanisms can accomplish efficient classification of odors.

}, keywords = {antennal lobes, apis-mellifera, associative memory, drosophila, fly brain, mushroom body, neural-networks, neurons, odor, representations, term-memory}, isbn = {0899-7667}, doi = {10.1162/089976604774201613}, url = {Spectrally broadband stimulation of neurons has been an effective method for studying their dynamic responses to simulated synaptic inputs. Previous studies with such stimulation were mostly based upon the direct intracellular injection of noisy current waveforms. In the present study we analyze and compare the firing output of various identified molluscan neurons to aperiodic, broadband current signals using three types of stimulus paradigms: (1) direct injection in current clamp mode, (2) conductance injection using electrotonic coupling of the input waveform to the neuron, and (3) conductance injection using a simulated chemical excitatory connection. The current waveforms were presented in 15 successive trials and the trial-to-trial variations of the spike responses were analyzed using peristimulus spike density functions. Comparing the responses of the neurons to the same type of input waveforms, we found that conductance injection resulted in more reliable and precise spike responses than direct current injection. The statistical parameters of the response spike trains depended on the spectral distribution of the input. The reliability increased with increasing cutoff frequency, while the temporal jitter of spikes changed in the opposite direction. Neurons with endogenous bursting displayed lower reproducibility in their responses to noisy waveforms when injected directly; however, they fired far more reliably and precisely when receiving the same waveforms as conductance inputs. The results show that molluscan neurons are capable of accurately reproducing their responses to synaptic inputs.
Conductance injection provides an enhanced experimental technique for assessing the neurons{\textquoteright} spike timing reliability and it should be preferred over direct current injection of noisy waveforms.

}, keywords = {clamp, conductances, cortical-neurons, dynamic, dynamic clamp, firing pattern, frequency-dependence, interneurons, Lymnaea neurons, neocortical neurons, noisy stimulation, respiratory behavior, sensory systems, snail lymnaea-stagnalis, variability}, isbn = {0306-4522}, doi = {10.1016/j.neuroscience.2004.04.015}, url = {Sensory information is represented in a spatio-temporal code in the antennal lobe, the first processing stage of the olfactory system of insects. We propose a novel mechanism for decoding this information in the next processing stage, the mushroom body. The Kenyon cells in the mushroom body of insects exhibit lateral excitatory connections at their axons. We demonstrate that slow lateral excitation between Kenyon cells allows one to decode sequences of activity in the antennal lobe. We are thus able to clarify the role of the existing connections as well as to demonstrate a novel mechanism for decoding temporal information in neuronal systems. This mechanism complements the variety of existing temporal decoding schemes. It seems that neuronal systems not only have a rich variety of code types but also quite a diversity of algorithms for transforming different codes into each other.

}, keywords = {bodies, body, discrimination, hippocampus, Kenyon cells, locust antennal lobe, mushroom, mushroom body, network, neural assemblies, odor, odor representation, olfaction, oscillations, primary visual-cortex, representations, sparse coding, temporal coding}, isbn = {0929-5313}, doi = {10.1023/a:1025825111088}, url = {We discuss a biophysical model of synaptic plasticity that provides a unified view of the outcomes of synaptic modification protocols, including: (1) prescribed time courses of postsynaptic intracellular Ca2+ release, (2) postsynaptic voltage clamping with presentation of presynaptic spike trains at various frequencies, (3) direct postsynaptic response to presynaptic spike trains at various frequencies, and (4) LTP/LTD as a response to precisely timed presynaptic and postsynaptic spikes. Our model has a Hodgkin-Huxley conductance-based neuron with AMPA and NMDA channels and voltage-gated calcium channels in addition to the usual currents. The time course of intracellular concentration of Ca2+ is determined by fluxes from these three sources and drives a competition of kinase and phosphatase pathways. Our critical assumption is a phenomenological form for the competition of kinase and phosphatase activity leading to changes in synaptic strength. This is examined in the context of experiments that induce plasticity by programmed postsynaptic intracellular Ca2+ release. It is successful in describing such experiments. This connection is used in conjunction with a biophysical model of postsynaptic membrane voltage and Ca2+ fluxes to show that the features of the other protocols are consequences of the model. 
We then use the model to predict the outcome of new experiments: (a) LTP/LTD spike timing plasticity in the presence of varying extracellular concentrations of Mg2+, (b) the response of synaptic strength to the presentation of concurrent presynaptic and postsynaptic spike trains, and (c) spike timing plasticity with two presynaptic (postsynaptic) spikes and one postsynaptic (presynaptic) spike.

}, keywords = {ampa, channels, dendritic spines, induction, long-term potentiation, ltp, memory, nmda receptors, relay neurons, spike}, isbn = {0340-1200}, doi = {10.1007/s00422-003-0422-x}, url = {Synchronization of neural activity is fundamental for many functions of the brain. We demonstrate that spike-timing dependent plasticity (STDP) enhances synchronization (entrainment) in a hybrid circuit composed of a spike generator, a dynamic clamp emulating an excitatory plastic synapse, and a chemically isolated neuron from the Aplysia abdominal ganglion. Fixed-phase entrainment of the Aplysia neuron to the spike generator is possible for a much wider range of frequency ratios and is more precise and more robust with the plastic synapse than with a nonplastic synapse of comparable strength. Further analysis in a computational model of Hodgkin-Huxley-type neurons reveals the mechanism behind this significant enhancement in synchronization. The experimentally observed STDP plasticity curve appears to be designed to adjust synaptic strength to a value suitable for stable entrainment of the postsynaptic neuron. One functional role of STDP might therefore be to facilitate synchronization or entrainment of nonidentical neurons.

}, keywords = {coincidence, conductances, depression, dynamic clamp, entrainment, hebbian plasticity, hybrid circuit, learning/, model, neuronal control, neurons, oscillations, spike-timing dependent plasticity, synaptic plasticity, synchronization}, isbn = {0270-6474}, url = {Using a modified version of a phenomenological model for the dynamics of synaptic plasticity, we examine some recent experiments of Wu et al. [(2001) J Physiol 533:745-755]. We show that the model is quantitatively consistent with their experimental protocols producing long-term potentiation (LTP) and long-term depression (LTD) in slice preparations of rat hippocampus. We also predict the outcome of similar experiments using different frequencies and depolarization levels than reported in their results.

}, isbn = {0340-1200}, doi = {10.1007/s00422-002-0376-4}, url = {The pyloric network of the lobster stomatogastric nervous system is one of the best described assemblies of oscillatory neurons producing bursts of action potentials. While the temporal patterns of bursts have been investigated in detail, those of spikes have received less attention. Here we analyze the intraburst firing patterns of pyloric neurons and the synaptic interactions shaping their dynamics on millisecond time scales, an analysis not performed before. We find that different pyloric neurons express characteristic, cell-specific firing patterns in their bursts. Nonlinear analysis of the interspike intervals (ISIs) reveals distinctive temporal structures ({\textquoteright}interspike interval signatures{\textquoteright}), which are found to depend on the synaptic connectivity of the network. We compare ISI patterns of the pyloric dilator (PD), lateral pyloric (LP), and ventricular dilator (VD) neurons in (1) normal conditions, (2) after blocking glutamatergic synaptic connections, and (3) in various functional configurations of the three neurons. Manipulation of the synaptic connectivity results in characteristic changes in the ISI signatures of the postsynaptic neurons. The intraburst firing pattern of the PD neuron is regularized by the inhibitory synaptic connection from the LP neuron as revealed in current-clamp experiments and also as reconstructed with a dynamic clamp. On the other hand, mutual inhibition between the LP and VD neurons tends to produce more irregular bursts with increased spike jitter. The results show that synaptic interactions fine-tune the output of pyloric neurons. The present data also suggest a way of processing of synaptic information: bursting neurons are capable of encoding incoming signals by altering the fine structure of their intraburst spike patterns.

}, keywords = {dynamic clamp, firing patterns, lobster stomatogastric ganglion, motor, neural-network, pacemaker neurons, panulirus-interruptus, patterns, single neurons, Spike trains, transmission}, isbn = {0022-3077}, doi = {10.1152/jn.00732.2002}, url = {We suggest a mechanism based on spike-timing-dependent plasticity (STDP) of synapses to store, retrieve and predict temporal sequences. The mechanism is demonstrated in a model system of simplified integrate-and-fire type neurons densely connected by STDP synapses. All synapses are modified according to the so-called normal STDP rule observed in various real biological synapses. After conditioning through repeated input of a limited number of temporal sequences, the system is able to complete the temporal sequence upon receiving the input of a fraction of them. This is an example of effective unsupervised learning in a biologically realistic system. We investigate the dependence of learning success on entrainment time, system size, and presence of noise. Possible applications include learning of motor sequences, recognition and prediction of temporal sensory information in the visual as well as the auditory system, and late processing in the olfactory system of insects.

}, keywords = {ca2+, dynamic, induction, long-term potentiation, ltp, memory sequences, model, networks, objects, place cells, synaptic modification}, isbn = {1539-3755}, doi = {10.1103/PhysRevE.68.011908}, url = {We introduce a new method for asymmetric (public key/private key) encryption exploiting properties of nonlinear dynamical systems. A high-dimensional dissipative nonlinear dynamical system is distributed between transmitter and receiver, so we call the method distributed dynamics encryption (DDE). The transmitter dynamics is public, and the receiver dynamics is hidden. A message is encoded by modulation of parameters of the transmitter, and this results in a shift of the overall system attractor. An unauthorized receiver does not know the hidden dynamics in the receiver and cannot decode the message. We present an example of DDE using a coupled map lattice.

}, keywords = {chaos}, isbn = {0031-9007}, doi = {10.1103/PhysRevLett.90.047903}, url = {We study the synchronization of two model neurons coupled through a synapse having an activity-dependent strength. Our synapse follows the rules of spike-timing dependent plasticity. We show that this plasticity of the coupling between neurons produces enlarged frequency-locking zones and results in synchronization that is more rapid and much more robust against noise than classical synchronization arising from connections with constant strength. We also present a simple discrete map model that demonstrates the generality of the phenomenon.

}, keywords = {neurons}, isbn = {1539-3755}, doi = {10.1103/PhysRevE.67.021901}, url = {Using the dynamic clamp technique, we investigated the effects of varying the time constant of mutual synaptic inhibition on the synchronization of bursting biological neurons. For this purpose, we constructed artificial half-center circuits by inserting simulated reciprocal inhibitory synapses between identified neurons of the pyloric circuit in the lobster stomatogastric ganglion. With natural synaptic interactions blocked (but modulatory inputs retained), these neurons generated independent, repetitive bursts of spikes with cycle period durations of approximately 1 s. After coupling the neurons with simulated reciprocal inhibition, we selectively varied the time constant governing the rate of synaptic activation and deactivation. At time constants less than or equal to 100 ms, bursting was coordinated in an alternating (anti-phase) rhythm. At longer time constants (\>400 ms), bursts became phase-locked in a fully overlapping pattern with little or no phase lag and a shorter period. During the in-phase bursting, the higher-frequency spiking activity was not synchronized. If the circuit lacked a robust periodic burster, increasing the time constant evoked a sharp transition from out-of-phase oscillations to in-phase oscillations with associated intermittent phase-jumping. When a coupled periodic burster neuron was present (on one side of the half-center circuit), the transition was more gradual. We conclude that the magnitude and stability of phase differences between mutually inhibitory neurons varies with the ratio of burst cycle period duration to synaptic time constant and that cellular bursting (whether periodic or irregular) can adopt in-phase coordination when inhibitory synaptic currents are sufficiently slow.

}, keywords = {dynamic clamp, identified neurons, interneuronal, lobster stomatogastric ganglion, networks, panulirus-interruptus, pre-botzinger complex, pyloric neurons, reticular nucleus, selective inactivation, thalamic, underlying pattern generation}, isbn = {0022-3077}, doi = {10.1152/jn.00784.2001}, url = {We address the question of bounds on the synchronization error for the case of nearly identical nonlinear systems. It is pointed out that negative largest conditional Lyapunov exponents of the synchronization manifold are not sufficient to guarantee a small synchronization error and that one has to find bounds for the deformation of the manifold due to perturbations. We present an analytic bound for a simple subclass of systems, which includes the Lur{\textquoteright}e systems, showing that the bound for the deformation grows as the largest singular value of the linearized system gets larger. Then, the Lorenz system is taken as an example to demonstrate that the phenomenon is not restricted to Lur{\textquoteright}e systems.

}, keywords = {chaotic systems, COMMUNICATION, invariant-manifolds}, isbn = {1539-3755}, doi = {10.1103/PhysRevE.66.036229}, url = {The role of synaptic dynamics in processing neural information is investigated in a neural information channel with realistic model neurons having chaotic intrinsic dynamics. Our neuron models are realized in simple analogue circuits, and our synaptic connections are realized both in analogue circuits and through a dynamic clamp program. The information which is input to the first chaotic neuron in the channel emerges partially absent and partially {\textquoteright}hidden{\textquoteright}. Part is absent because of the dynamical effects of the chaotic oscillation that effectively acts as a noisy channel. The {\textquoteright}hidden{\textquoteright} part is recoverable. We show that synaptic parameters, most significantly receptor binding time constants, can be tuned to enhance the information transmission by the combination of a neuron plus a synapse. We discuss how the dynamics of the synapse can be used to recover {\textquoteright}hidden{\textquoteright} information using average mutual information as a measure of the quality of information transport.

}, keywords = {clamp, connections, entropy, model, neurons, synapses, synchronous behavior, transmission}, isbn = {0954-898X}, doi = {10.1088/0954-898x/13/4/304}, url = {We have built several networks of inferior olive (IO) model neurons to study the emerging spatio-temporal patterns of neuronal activity. The degree and extent of the electrical coupling, and the presence of stimuli were the main factors considered in the IO networks. The network activity was analyzed using a discrete wavelet transform which provides a quantitative characterization of the spatio-temporal patterns. This study reveals the ability of these networks to generate characteristic spatio-temporal patterns which can be essential for the function of the IO. (C) 2002 Elsevier Science B.V. All rights reserved.

}, isbn = {0925-2312}, doi = {10.1016/s0925-2312(02)00458-7}, url = {Semiconductor lasers provide an excellent opportunity for communication using chaotic waveforms. We discuss the characteristics and the synchronization of two semiconductor lasers with optoelectronic feedback. The systems exhibit broadband chaotic intensity oscillations whose dynamical dimension generally increases with the time delay in the feedback loop. We explore the robustness of this synchronization with parameter mismatch in the lasers, with mismatch in the optoelectronic feedback delay, and with the strength of the coupling between the systems. Synchronization is robust to mismatches between the intrinsic parameters of the lasers, but it is sensitive to mismatches of the time delay in the transmitter and receiver feedback loops. An open-loop receiver configuration is suggested, eliminating feedback delay mismatch issues. Communication strategies for arbitrary amplitude of modulation onto the chaotic signals are discussed, and the bit-error rate for one such scheme is evaluated as a function of noise in the optical channel.

}, keywords = {chaos, communication system nonlinearities, dynamics, generalized synchronization, optical communication, optoelectronic devices, synchronization}, isbn = {0018-9197}, doi = {10.1109/3.952542}, url = {Locust antennal lobe (AL) projection neurons (PNs) respond to olfactory stimuli with sequences of depolarizing and hyperpolarizing epochs, each lasting hundreds of milliseconds. A computer simulation of an AL network was used to test the hypothesis that slow inhibitory connections between local neurons (LNs) and PNs are responsible for temporal patterning. Activation of slow inhibitory receptors on PNs by the same GABAergic synapses that underlie fast oscillatory synchronization of PNs was sufficient to shape slow response modulations. This slow stimulus- and neuron-specific patterning of AL activity was resistant to blockade of fast inhibition. Fast and slow inhibitory mechanisms at synapses between LNs and PNs can thus form dynamical PN assemblies whose elements synchronize transiently and oscillate collectively, as observed not only in the locust AL, but also in the vertebrate olfactory bulb.

}, keywords = {computation, information, memory, neurons, olfactory-bulb, oscillating neural assemblies, pheromone, representation, responses, stimulation}, isbn = {0896-6273}, doi = {10.1016/s0896-6273(01)00286-0}, url = {Transient pairwise synchronization of locust antennal lobe (AL) projection neurons (PNs) occurs during odor responses. In a Hodgkin-Huxley-type model of the AL, interactions between excitatory PNs and inhibitory local neurons (LNs) created coherent network oscillations during odor stimulation. GABAergic interconnections between LNs led to competition among them such that different groups of LNs oscillated with periodic Ca2+ spikes during different 50-250 ms temporal epochs, similar to those recorded in vivo. During these epochs, LN-evoked IPSPs caused phase-locked, population oscillations in sets of postsynaptic PNs. The model shows how alternations of the inhibitory drive can temporally encode sensory information in networks of neurons without precisely tuned intrinsic oscillatory properties.

}, keywords = {encoding neural assemblies, information, interneurons, mechanisms, network, odor, olfactory-bulb, patterns, stimulation, thalamic reticular nucleus}, isbn = {0896-6273}, doi = {10.1016/s0896-6273(01)00284-7}, url = {An essential question raised after the observation of highly variable bursting activity in individual neurons of Central Pattern Generators (CPGs) is how an assembly of such cells can cooperatively act to produce regular signals to motor systems. It is well known that some neurons in the lobster stomatogastric ganglion have a highly irregular spiking-bursting behavior when they are synaptically isolated from any connection in the CPG. Experimental recordings show that periodic stimuli on a single neuron can regulate its firing activity. Other evidence demonstrates that specific chemical and/or electrical synapses among neurons also induce the regularization of the rhythms. In this paper we present a modeling study in which a slow subcellular dynamics, the exchange of calcium between an intracellular store and the cytoplasm, is responsible for the origin and control of the irregular spiking-bursting activity. We show this in simulations of single cells under periodic driving and in minimal networks where the cooperative activity can induce regularization. While often neglected in the description of realistic neuron models, subcellular processes with slow dynamics may play an important role in information processing and short-term memory of spiking-bursting neurons. (C) 2001 Elsevier Science Ltd. All rights reserved.

}, keywords = {biological neurons, Calcium, calcium oscillations, chaos, CPGs, dynamic clamp, oscillations, regularization, spiking-bursting neurons, subcellular dynamics}, isbn = {0893-6080}, doi = {10.1016/s0893-6080(01)00046-6}, url = {The dynamic clamp protocol allows an experimenter to simulate the presence of membrane conductances in, and synaptic connections between, biological neurons. Existing protocols and commercial ADC/DAC boards provide ready control in and between at most two neurons. Control at more than two sites is desirable when studying neural circuits with serial or ring connectivity. Here, we describe how to extend dynamic clamp control to four neurons and their associated synaptic interactions, using a single IBM-compatible PC, an ADC/DAC interface with two analog outputs, and an additional demultiplexing circuit. A specific C++ program, DYNCLAMP4, implements these procedures in a Windows environment, allowing one to change parameters while the dynamic clamp is running. Computational efficiency is increased by varying the duration of the input-output cycle. The program simulates up to 8 Hodgkin-Huxley-type conductances and up to 18 (chemical and/or electrical) synapses in up to 4 neurons and runs at a minimum update rate of 5 kHz on a 450 MHz CPU. (Increased speed is possible in a two-neuron version that does not need auxiliary circuitry). Using identified neurons of the crustacean stomatogastric ganglion, we illustrate on-line parameter modification and the construction of three-member synaptic rings. (C) 2001 Elsevier Science B.V. All rights reserved.

}, keywords = {artificial, artificial synapses, central pattern generator, conductance, conductances, dynamic clamp, ELECTRONIC, Hodgkin-Huxley, identified neurons, in-vitro, lobster stomatogastric ganglion, medicinal leech, neural circuits, neural networks, neurons, pyloric network, selective inactivation, simulation, synaptic rings}, isbn = {0165-0270}, doi = {10.1016/s0165-0270(01)00368-5}, url = {The synchronization of chaotic rare-earth-doped fiber ring lasers is analyzed. The lasers are first coupled by transmitting a fraction c of the circulating electric field in the transmitter and injecting it into the optical cavity of the receiver. A coupling strategy which relies on modulation of the intensity of the light alone is also examined. Synchronization is studied as a function of the coupling strength, and we see excellent synchronization, even with very small c. We prove that in an open loop configuration (c=1) synchronization is guaranteed due to the particular structure of our equations and of the injection method we use. The generalized synchronization of these model lasers is examined when there is parameter mismatch between the transmitter and receiver lasers. The synchronization is found to be insensitive to a wide range of mismatch in laser parameters, but it is sensitive to other parameters, in particular those associated with the phase and the polarization of the circulating electric field. Communicating information between the transmitter and receiver lasers is also addressed. We investigate a scheme for modulating information onto the chaotic electric field and then demodulating and detecting the information embedded in the chaotic signal passed down the communications channel. We show full recovery with very low error for a wide range of coupling strengths.

}, keywords = {dynamics, generalized synchronization, systems}, isbn = {1063-651X}, url = {Conductance-based models of neurons from the lobster stomatogastric ganglion (STG) have been developed to understand the observed chaotic behavior of individual STG neurons. These models identify an additional slow dynamical process - calcium exchange and storage in the endoplasmic reticulum - as a biologically plausible source for the observed chaos in the oscillations of these cells. In this paper we test these ideas further by exploring the dynamical behavior when two model neurons are coupled by electrical or gap junction connections. We compare in detail the model results to the laboratory measurements of electrically-coupled neurons that we reported earlier. The experiments on the biological neurons varied the strength of the effective coupling by applying a parallel, artificial synapse, which changed both the magnitude and polarity of the conductance between the neurons. We observed a sequence of bifurcations that took the neurons from strongly synchronized in-phase behavior, through uncorrelated chaotic oscillations to strongly synchronized - and now regular - out-of-phase behavior. The model calculations reproduce these observations quantitatively, indicating that slow subcellular processes could account for the mechanisms involved in the synchronization and regularization of the otherwise individual chaotic activities.

}, keywords = {cerebellar purkinje-cells, inositol, localization, receptor, ryanodine, stores}, isbn = {0340-1200}, doi = {10.1007/s004220000198}, url = {Following studies of olfactory processing in insects and fish, we investigate neural networks whose dynamics in phase space is represented by orbits near the heteroclinic connections between saddle regions (fixed points or limit cycles). These networks encode input information as trajectories along the heteroclinic connections. If there are N neurons in the network, the capacity is approximately e(N - 1)!, i.e., much larger than that of most traditional network structures. We show that a small winnerless competition network composed of FitzHugh-Nagumo spiking neurons efficiently transforms input information into a spatiotemporal output.

}, keywords = {systems}, isbn = {0031-9007}, doi = {10.1103/PhysRevLett.87.068102}, url = {The use of methods from contemporary nonlinear dynamics in studying neurobiology has been rather limited. Yet, nonlinear dynamics has become a practical tool for analyzing data and verifying models. This has led to productive coupling of nonlinear dynamics with experiments in neurobiology in which the neural circuits are forced with constant stimuli, with slowly varying stimuli, with periodic stimuli, and with more complex information-bearing stimuli. Analysis of these more complex stimuli of neural circuits goes to the heart of how one is to understand the encoding and transmission of information by nervous systems.

}, keywords = {low-dimensional dynamics, models, neurons, noise, sensory biology}, isbn = {0959-4388}, doi = {10.1016/s0959-4388(00)00229-4}, url = {In the analysis of time series from nonlinear sources, mutual information (MI) is used as a nonlinear statistical criterion for the selection of an appropriate time delay in time delay reconstruction of the state space. MI is a statistic over the sets of sequences associated with the dynamical source, and we examine here the distribution of MI, thus going beyond the familiar analysis of its average alone. We give for the first time the distribution of MI for a standard, classical communications channel with Gaussian, additive white noise. For time series analysis of a dynamical system, we show how to determine the distribution of MI and discuss the implications for the use of average mutual information (AMI) in selecting time delays in phase space reconstruction. (C) 2001 Elsevier Science B.V. All rights reserved.

}, isbn = {0375-9601}, doi = {10.1016/s0375-9601(01)00128-1}, url = {Periodic current forcing was used to investigate the intrinsic dynamics of a small group of electrically coupled neurons in the pyloric central pattern generator (CPG) of the lobster. This group contains three neurons, namely the two pyloric dilator (PD) motoneurons and the anterior burster (AB) interneuron. Intracellular current injection, using sinusoidal waveforms of varying amplitude and frequency, was applied in three configurations of the pacemaker neurons: 1) the complete pacemaker group, 2) the two PDs without the AB, and 3) the AB neuron isolated from the PDs. Depending on the frequency and amplitude of the injected current, the intact pacemaker group exhibited a wide variety of nonlinear behaviors, including synchronization to the forcing, quasiperiodicity, and complex dynamics. In contrast, a single, broad 1:1 entrainment zone characterized the response of the PD neurons when isolated from the main pacemaker neuron AB. The isolated AB responded to periodic forcing in a manner similar to the complete pacemaker group, but with wider zones of synchronization. We have built an analog electronic circuit as an implementation of a modified Hindmarsh-Rose model for simulating the membrane potential activity of pyloric neurons. We subjected this electronic model neuron to the same periodic forcing as used in the biological experiments. This four-dimensional electronic model neuron reproduced the autonomous oscillatory firing patterns of biological pyloric pacemaker neurons, and it expressed the same stationary nonlinear responses to periodic forcing as its biological counterparts. This adds to our confidence in the model. 
These results strongly support the idea that the intact pyloric pacemaker group acts as a uniform low-dimensional deterministic nonlinear oscillator, and the regular pyloric oscillation is the outcome of cooperative behavior of strongly coupled neurons, having different dynamical and biophysical properties when isolated.

}, keywords = {dynamics, electronic neurons, graded synaptic transmission, lobster stomatogastric ganglion, network, panulirus-interruptus, patterns, responses, spike, synchronous behavior, trains}, isbn = {0022-3077}, url = {The pyloric Central Pattern Generator (CPG) in the lobster has an architecture in which every neuron receives at least one connection from another member of the CPG. We call this a {\textquotedblleft}non-open{\textquotedblright} network topology. An {\textquotedblleft}open{\textquotedblright} topology, where at least one neuron does not receive synapses from any other CPG member, is found neither in the pyloric nor in the gastric mill CPG. Here we investigate a possible reason for this topological structure using the ability to perform a biologically functional task as a measure of the efficacy of the network. When the CPG is composed of model neurons that exhibit regular membrane voltage oscillations, open topologies are as able to maximize this functionality as non-open topologies. When we replace these models by neurons which exhibit chaotic membrane voltage oscillations, the functional criterion selects non-open topologies. As isolated neurons from invertebrate CPGs are known in some cases to undergo chaotic oscillations, this suggests that there is a biological basis for the class of non-open network topologies that we observe.

}, isbn = {0340-1200}, doi = {10.1007/PL00007976}, url = {http://dx.doi.org/10.1007/PL00007976}, author = {Huerta, R. and Varona, P. and Rabinovich, M. I. and Abarbanel, Henry D. I.} } @article {37162, title = {Odor encoding as an active, dynamical process: Experiments, computation, and theory}, journal = {Annual Review of Neuroscience}, volume = {24}, year = {2001}, note = {n/a}, pages = {263-297}, type = {Review}, abstract = {We examine early olfactory processing in the vertebrate and insect olfactory systems, using a computational perspective. What transformations occur between the first and second olfactory processing stages? What are the causes and consequences of these transformations? To answer these questions, we focus on the functions of olfactory circuit structure and on the role of time in odor-evoked integrative processes. We argue that early olfactory relays are active and dynamical networks, whose actions change the format of odor-related information in very specific ways, so as to refine stimulus identification. Finally, we introduce a new theoretical framework ("winnerless competition") for the interpretation of these data.

}, keywords = {antennal lobe, assemblies, dynamical systems, functional-organization, learning, mitral tufted cells, mushroom bodies, neuronal-activity, olfaction, olfactory bulb, organization, oscillating neural, oscillations, periplaneta-americana, rabbit olfactory-bulb, synaptic, temporal visual-cortex}, isbn = {0147-006X}, doi = {10.1146/annurev.neuro.24.1.263}, url = {Based on experiments with the locust olfactory system, we demonstrate that model sensory neural networks with lateral inhibition can generate stimulus specific identity-temporal patterns in the form of stimulus-dependent switching among small and dynamically changing neural ensembles (each ensemble being a group of synchronized projection neurons). Networks produce this switching mode of dynamical activity when lateral inhibitory connections are strongly non-symmetric. Such coding uses {\textquoteright}winner-less competitive{\textquoteright} (WLC) dynamics. In contrast to the well known winner-take-all competitive (WTA) networks and Hopfield nets, winner-less competition represents sensory information dynamically. Such dynamics are reproducible, robust against intrinsic noise and sensitive to changes in the sensory input. We demonstrate the validity of sensory coding with WLC networks using two different formulations of the dynamics, namely the average and spiking dynamics of projection neurons (PN). (C) 2000 Elsevier Science Ltd. Published by Editions scientifiques et medicales Elsevier SAS.

}, keywords = {coding, competitive dynamics, neural networks, olfaction, olfactory network, Sensors, systems}, isbn = {0928-4257}, doi = {10.1016/s0928-4257(00)01092-5}, url = {Central pattern generating neurons from the lobster stomatogastric ganglion were analyzed using new nonlinear methods. The LP neuron was found to have only four or five degrees of freedom in the isolated condition and displayed chaotic behavior. We show that this chaotic behavior could be regularized by periodic pulses of negative current injected into the neuron or by coupling it to another neuron via inhibitory connections. We used both a modified Hindmarsh-Rose model to simulate the neuron{\textquoteright}s behavior phenomenologically and a more realistic conductance-based model so that the modeling could be linked to the experimental observations. Both models were able to capture the dynamics of the neuron{\textquoteright}s behavior better than previous models. We used the Hindmarsh-Rose model as the basis for building electronic neurons which could then be integrated into the biological circuitry. Such neurons were able to rescue patterns which had been disabled by removing key biological neurons from the circuit. (C) 2000 Elsevier Science Ltd. Published by Editions scientifiques et medicales Elsevier SAS.

}, keywords = {behavior, central pattern generator, chaotic behavior, model, neural modeling, neurons, nonlinear systems electronic, stomatogastric ganglion}, isbn = {0928-4257}, doi = {10.1016/s0928-4257(00)01101-3}, url = {Biological neural communications channels transport environmental information from sensors through chains of active dynamical neurons to neural centers for decisions and actions to achieve required functions. These kinds of communications channels are able to create information and to transfer information from one time scale to the other because of the intrinsic nonlinear dynamics of the component neurons. We discuss a very simple neural information channel composed of sensory input in the form of a spike train that arrives at a model neuron, then moves through a realistic synapse to a second neuron where the information in the initial sensory signal is read. Our model neurons are four-dimensional generalizations of the Hindmarsh-Rose neuron, and we use a model of chemical synapse derived from first-order kinetics. The four-dimensional model neuron has a rich variety of dynamical behaviors, including periodic bursting, chaotic bursting, continuous spiking, and multistability. We show that, for many of these regimes, the parameters of the chemical synapse can be tuned so that information about the stimulus that is unreadable at the first neuron in the channel can be recovered by the dynamical activity of the synapse and the second neuron. Information creation by nonlinear dynamical systems that allow chaotic oscillations is familiar in their autonomous oscillations. It is associated with the instabilities that lead to positive Lyapunov exponents in their dynamical behavior. Our results indicate how nonlinear neurons acting as input/output systems along a communications channel can recover information apparently "lost" in earlier junctions on the channel. 
Our measure of information transmission is the average mutual information between elements, and because the channel is active and nonlinear, the average mutual information between the sensory source and the final neuron may be greater than the average mutual information at an earlier neuron in the channel. This behavior is strikingly different from the passive role communications channels usually play, and the "data processing theorem" of conventional communications theory is violated by these neural channels. Our calculations indicate that neurons can reinforce reliable transmission along a chain even when the synapses and the neurons are not completely reliable components. This phenomenon is generic in parameter space, robust in the presence of noise, and independent of the discretization process. Our results suggest a framework in which one might understand the apparent design complexity of neural information transduction networks. If networks with many dynamical neurons can recover information not apparent at various way stations in the communications channel, such networks may be more robust to noisy signals, may be more capable of communicating many types of encoded sensory neural information, and may be the appropriate design for components, neurons and synapses, which can be individually imprecise, inaccurate "devices."

}, keywords = {entropy, excitable system, model, mutual information, neurons, resonance}, isbn = {1063-651X}, doi = {10.1103/PhysRevE.62.7111}, url = {There is a substantial body of experimental evidence that neurons often produce oscillations to achieve their functional goals. They thus behave as dynamical systems despite the fluctuations we observe due to environmental noise and imperfections in their construction. In observations of neural behavior these oscillations can appear "intrinsic" as in the rhythmical pulsing of Central Pattern Generators (CPGs) or the oscillations can arise in response to a stimulus as in the actions of projection neurons in olfactory operation or even in the dynamical response of cortex neurons. When assemblies of neurons perform oscillations, their collective behavior is determined in an essential way by both the nonlinear dynamics of the individuals in the assembly and by the architecture of the neural circuitry. The neurons inside an assembly can synchronize, possibly with an evident time lag, to produce particular patterns which control the rhythmic muscular activity of an animal, as in CPG operation. The component neurons may compete with each other in a dynamical fashion to solve the problem of pattern recognition, as in the cortex. They may collectively produce rich spatiotemporal behavior in response to specific forms of stimulus from sensory systems. This is a review of these phenomena which are discussed in language familiar in the description of nonlinear dynamical systems including synchronization and competition. We illustrate our ideas with data from experiment and model simulations for both individual neurons and assemblies of neurons.

}, keywords = {behavior, cerebellar purkinje-cells, chaotic neurons, lamprey, model, network, oscillations, receptor, synchronization, systems}, isbn = {0218-1274}, doi = {10.1142/s0218127400000669}, url = {Chaotic bursting has been recorded in synaptically isolated neurons of the pyloric central pattern generating (CPG) circuit in the lobster stomatogastric ganglion. Conductance-based models of pyloric neurons typically fail to reproduce the observed irregular behavior in either voltage time series or state-space trajectories. Recent suggestions of Chay [Biol Cybern 75, 419-431] indicate that chaotic bursting patterns can be generated by model neurons that couple membrane currents to the nonlinear dynamics of intracellular calcium storage and release. Accordingly, we have built a two-compartment model of a pyloric CPG neuron incorporating previously described membrane conductances together with intracellular Ca2+ dynamics involving the endoplasmic reticulum and the inositol 1,4,5-trisphosphate receptor IP3R. As judged by qualitative inspection and quantitative, nonlinear analysis, the irregular voltage oscillations of the model neuron resemble those seen in the biological neurons. Chaotic bursting arises from the interaction of fast membrane voltage dynamics with slower intracellular Ca2+ dynamics and, hence, depends on the concentration of IP3. Despite the presence of 12 independent dynamical variables, the model neuron bursts chaotically in a subspace characterized by 3-4 active degrees of freedom. The critical aspect of this model is that chaotic oscillations arise when membrane voltage processes are coupled to another slow dynamic. Here we suggest this slow dynamic to be intracellular Ca2+ handling.

}, keywords = {cerebellar purkinje-cells, crab stomatogastric ganglion, excitable cell, inositol, receptor, stores, trisphosphate}, isbn = {0340-1200}, doi = {10.1007/s004220050604}, url = {Small assemblies of neurons such as central pattern generators (CPG) are known to express regular oscillatory firing patterns comprising bursts of action potentials. In contrast, individual CPG neurons isolated from the remainder of the network can generate irregular firing patterns. In our study of cooperative behavior in CPGs we developed an analog electronic neuron (EN) that reproduces firing patterns observed in lobster pyloric CPG neurons. Using a tuneable artificial synapse we connected the EN bidirectionally to neurons of this CPG. We found that the periodic bursting oscillation of this mixed assembly depends on the strength and sign of the electrical coupling. Working with identified, isolated pyloric CPG neurons whose network rhythms were impaired, the EN/biological network restored the characteristic CPG rhythm both when the EN oscillations are regular and when they are irregular. NeuroReport 11:563-569 (C) 2000 Lippincott Williams \& Wilkins.

}, keywords = {chaos, electronic neuron, electronic synapse, lobster stomatogastric ganglion, network, neural systems, neurons, oscillatory rhythm, pattern generation, pyloric, regularization, synchronization}, isbn = {0959-4965}, url = {We observed the emergence of a new slow regular rhythm along a chain of unidirectionally coupled neurons whose individual dynamics is periodic spiking. In this study we use Hindmarsh-Rose-type neurons which are potentially able to produce several modes of behavior: periodic spiking, periodic spiking-bursting, and chaotic spiking-bursting activities. Several spatial bifurcations take place along the chain: the bifurcation from the periodic spiking regime to chaotic spiking-bursting, a transformation corresponding to developing chaos, and finally the transition from an irregular spiking-bursting regime to a regime with regular bursts. The calculation of the Kolmogorov-Sinai entropy indicates that the periodic oscillations of some neurons at the beginning of the chain are transformed into spiking-bursting chaos that is localized along the network, becoming later regular slow oscillations in spite of the chaoticity of the spiking pulsations. (C) 2000 Published by Elsevier Science B.V. All rights reserved.

}, keywords = {lattices, neurons}, isbn = {0375-9601}, doi = {10.1016/s0375-9601(00)00015-3}, url = {We report on experimental studies of synchronization phenomena in a pair of analog electronic neurons (ENs). The ENs were designed to reproduce the observed membrane voltage oscillations of isolated biological neurons from the stomatogastric ganglion of the California spiny lobster Panulirus interruptus. The ENs are simple analog circuits which integrate four-dimensional differential equations representing fast and slow subcellular mechanisms that produce the characteristic regular/chaotic spiking-bursting behavior of these cells. In this paper we study their dynamical behavior as we couple them in the same configurations as we have done for their counterpart biological neurons. The interconnections we use for these neural oscillators are both direct electrical connections and excitatory and inhibitory chemical connections, each realized by analog circuitry and suggested by biological examples. We provide here quantitative evidence that the ENs and the biological neurons behave similarly when coupled in the same manner. They each display well defined bifurcations in their mutual synchronization and regularization. We report briefly on an experiment on coupled biological neurons and four-dimensional ENs, which provides further ground for testing the validity of our numerical and electronic models of individual neural behavior. Our experiments as a whole present interesting new examples of regularization and synchronization in coupled nonlinear oscillators.

}, keywords = {chaos, dynamics, model, systems}, isbn = {1063-651X}, doi = {10.1103/PhysRevE.62.2644}, url = {In this letter we investigate a communication strategy for digital ultra-wide bandwidth impulse radio, where the separation between the adjacent pulses is chaotic arising from a dynamical system with irregular behavior. A pulse position method is used to modulate binary information onto the carrier. The receiver is synchronized to the chaotic pulse train, thus providing the time reference for information extraction. We characterize the performance of this scheme in terms of error probability versus $E_b/N_0$ by numerically simulating its operation in the presence of noise and filtering.
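The chaotic pulse-position idea in this entry can be sketched in toy form. A tent map stands in for the paper's chaotic dynamical system, and the spacing and offset constants are illustrative assumptions; a receiver that regenerates the same chaotic sequence reads each bit off the residual pulse delay:

```python
# Sketch of chaotic pulse-position modulation: interpulse spacings come
# from a tent map (an assumed stand-in for the paper's chaotic system);
# each "1" bit delays its pulse by a small extra offset delta.

def tent_map(x, mu=1.99):
    return mu * x if x < 0.5 else mu * (1.0 - x)

def modulate(bits, x0=0.37, base=1.0, delta=0.1):
    """Return pulse times: chaotic spacing plus a bit-dependent offset."""
    times, t, x = [], 0.0, x0
    for b in bits:
        x = tent_map(x)
        t += base + x + (delta if b else 0.0)
        times.append(t)
    return times

def demodulate(times, x0=0.37, base=1.0, delta=0.1):
    """A synchronized receiver regenerates the chaotic spacings and
    reads each bit off the leftover offset."""
    bits, t, x = [], 0.0, x0
    for pulse in times:
        x = tent_map(x)
        gap = pulse - t - base - x
        bits.append(1 if gap > delta / 2 else 0)
        t = pulse
    return bits

bits = [1, 0, 1, 1, 0, 0, 1]
recovered = demodulate(modulate(bits))
```

The synchronization assumed in `demodulate` (shared x0) is exactly the time reference the abstract says the receiver must acquire.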

}, keywords = {chaos, impulse radio, spread spectrum}, isbn = {1089-7798}, doi = {10.1109/4234.841319}, url = {Chaotically oscillating rare-earth-doped fiber ring lasers (DFRLs) may provide an attractive way to exploit the broad bandwidth available in an optical communications system. Recent theoretical and experimental investigations have successfully shown techniques to modulate information onto the wide-band chaotic oscillations, transmit that signal along an optical fiber, and demodulate the information at the receiver. We develop a theoretical model of a DFRL and discuss an efficient numerical simulation which includes intrinsic linear and nonlinear induced birefringence, both transverse polarizations, group velocity dispersion, and a finite gain bandwidth. We analyze first a configuration with a single loop of optical fiber containing the doped fiber amplifier, and then, as suggested by Roy and VanWiggeren, we investigate a system with two rings of optical fiber, one made of passive fiber alone. The typical round-trip time for the passive optical ring connecting the erbium-doped amplifier to itself is 200 ns, so approximately $10^5$ round-trips are required to see the slow effects of the population inversion dynamics in this laser system. Over this large number of round-trips, physical effects like GVD and the Kerr nonlinearity, which may appear small at our frequencies and laser powers via conventional estimates, may accumulate and dominate the dynamics. We demonstrate from our model that chaotic oscillations of the ring laser with parameters relevant to erbium-doped fibers arise from the nonlinear Kerr effect and not from interplay between the atomic population inversion and radiation dynamics. [S1050-2947(99)08607-2].

}, keywords = {cavity, generation, polarization self-modulation, semiconductor-lasers, transmitted light}, isbn = {1050-2947}, doi = {10.1103/PhysRevA.60.2360}, url = {In the oscillatory circuits known as central pattern generators (CPGs), most synaptic connections are inhibitory. We have assessed the effects of inhibitory synaptic input on the dynamic behavior of a component neuron of the pyloric CPG in the lobster stomatogastric ganglion. Experimental perturbations were applied to the single, lateral pyloric neuron (LP), and the resulting voltage time series were analyzed using an entropy measure obtained from power spectra. When isolated from phasic inhibitory input, LP generates irregular spiking-bursting activity. Each burst begins in a relatively stereotyped manner but then evolves with exponentially increasing variability. Periodic, depolarizing current pulses are poor regulators of this activity, whereas hyperpolarizing pulses exert a strong, frequency-dependent regularizing action. Rhythmic inhibitory inputs from presynaptic pacemaker neurons also regularize the bursting. These inputs 1) reset LP to a similar state at each cycle, 2) extend and further stabilize the initial, quasi-stable phase of its bursts, and 3) at sufficiently high frequencies terminate ongoing bursts before they become unstable. The dynamic time frame for stabilization overlaps the normal frequency range of oscillations of the pyloric CPG. Thus, in this oscillatory circuit, the interaction of rhythmic inhibitory input with intrinsic burst properties affects not only the phasing, but also the dynamic stability of neural activity.

}, keywords = {cortex, lobster stomatogastric ganglion, locomotion, pattern generation, principles, pyloric network}, isbn = {0022-3077}, url = {We have explored the role of calcium concentration dynamics in the generation of chaos and in the regularization of the bursting oscillations using a minimal neural circuit of two coupled model neurons. In regions of the control parameter space where the slowest component, namely the calcium concentration in the endoplasmic reticulum, weakly depends on the other variables, this model is analogous to three dimensional systems as found in [1] or [2]. These are minimal models that describe the fundamental characteristics of the chaotic spiking-bursting behavior observed in real neurons. We have investigated different regimes of cooperative behavior in large assemblies of such units using lattices of non-identical Hindmarsh-Rose neurons electrically coupled with parameters chosen randomly inside the chaotic region. We study the regularization mechanisms in large assemblies and the development of several spatio-temporal patterns as a function of the interconnectivity among nearest neighbors.

}, keywords = {Calcium, chaotic spiking-bursting neurons, coarse grain dynamics, excitable cell, regularization mechanisms, spatio-temporal patterns}, isbn = {0378-4371}, doi = {10.1016/s0378-4371(98)00528-7}, url = {We describe prediction of ocean water levels between geographically separated locations by using a method derived from studies of chaotic dynamical systems. This interstation predictor requires only previously observed water-level data collected simultaneously from the target and baseline water-level measuring stations. The current observations at the baseline station are then used for making the predictions. The method is demonstrated using data from seven {\textquotedblleft}tide{\textquotedblright} stations with different water level characteristics operated by the U.S. government along the U.S. southeast coast. The data are averaged over 3 min at the sensor to filter out high-frequency motions and are reported at 6-min intervals. Thus the recorded water levels are all ocean surface motions that occur on timescales greater than a few minutes. The predictor forms the reconstructed attractor for both stations using previously observed data. For each new observation at the baseline station, it places the corresponding state-space vector onto the attractor for that station. A map is then derived that associates the neighborhood around that point to the corresponding temporal neighborhood of past observations at the target station. The current observation at the baseline station is then mapped to the appropriate neighborhood for the target station. This is the estimate of the water level at the target station. This method is attractive because the data requirements are simple, the computation burden is low, and there are few decisions about the parameters needed by the algorithm. The state-space predictor compares favorably to traditional methods including statistical correlation, cross-spectral analysis, harmonic analysis, and response analysis. 
Interstation predictions are important for marine navigation and other applications. The state-space predictor can also provide an objective way of locating tide stations, quantifying the spatial variability of ocean water levels, and identifying regions where ocean water level dynamics are similar.
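The interstation state-space predictor described above can be sketched as follows. Synthetic sinusoidal series stand in for the tide-station records, and the embedding dimension and delay are illustrative choices, not the paper's: embed the baseline series in delay coordinates, find the nearest past neighbor of the current state, and read off the target station's value at that past time.

```python
import math

# Delay-coordinate embedding of the baseline series; the current state's
# nearest past neighbor indexes the simultaneous target-station value.

def delay_vectors(series, dim=3, tau=1):
    start = (dim - 1) * tau
    vecs = [tuple(series[i - k * tau] for k in range(dim))
            for i in range(start, len(series))]
    return vecs, start

def predict_target(baseline, target, dim=3, tau=1):
    """Estimate the current target value from the current baseline state."""
    vecs, start = delay_vectors(baseline, dim, tau)
    query = vecs[-1]                      # current baseline state
    best_i, best_d = 0, float("inf")
    for i, v in enumerate(vecs[:-1]):     # search past states only
        d = math.dist(v, query)
        if d < best_d:
            best_i, best_d = i, d
    return target[start + best_i]         # target value at the neighbor's time

# Synthetic "tide" records: the target is a phase-shifted, scaled baseline.
baseline = [math.sin(0.2 * n) for n in range(200)]
target = [0.5 * math.sin(0.2 * n + 0.3) for n in range(200)]
estimate = predict_target(baseline, target)
```

A production version would map a whole neighborhood of past states, as the abstract describes, rather than the single nearest neighbor used in this sketch.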

}, isbn = {0148-0227}, doi = {10.1029/1999JC900025}, url = {The classical problem of characterizing and classifying ocean water levels (all fluctuations that are greater than a few minutes duration) is examined using methods derived from studies of nonlinear dynamical systems. The motivation for this study is the difficulty of characterizing coastal water level dynamics and tide zones with existing methods. There is also long-standing evidence that coastal water levels are not a simple linear superposition of astronomical tides and other influences. Thus it can be appropriate to view water levels as a single, nonlinear, dynamical system. We show that it is appropriate to treat water levels as chaotic by virtue of the existence of a positive Lyapunov exponent for the seven data sets studied. The integer embedding space (the number of state space coordinates) needed to reconstruct an attractor for data collected from sensors exposed to the open ocean is five. Four dynamical degrees of freedom appear to be required to describe the observed dynamics in a state space reconstructed solely from the observations themselves. Water levels in a complex estuary (Chesapeake Bay) have a global dimension of six and have five dynamical degrees of freedom. The largest global Lyapunov exponents, a measure of predictability, vary from 0.57 h$^{-1}$ for a station relatively well exposed to the ocean (Charleston, South Carolina) to 4.6 h$^{-1}$ for a station well inside a complex estuary (Baltimore, Maryland). The larger values are generally associated with stations that are less predictable, which is consistent with the errors of the astronomical estimator currently used by the U.S. government to generate tide predictions. Lower values are associated with water levels where the estimator errors are smaller. These results are consistent with the interpretation of the Lyapunov exponents as a measure of dynamical predictability. 
The dynamical characteristics, notably the Lyapunov exponents, are shown to be good candidates for characterizing water level variability and classifying tide zones.
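The largest Lyapunov exponent used here as a predictability measure can be illustrated on a system where its value is known exactly. The fully chaotic logistic map at $r = 4$, whose exponent is $\ln 2$, is an assumed stand-in, not one of the paper's data sets:

```python
import math

# Largest Lyapunov exponent of a one-dimensional map: the average
# log-stretching rate |f'(x)| along an orbit. For the logistic map at
# r = 4 the exact value is ln 2, so the estimate can be checked.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def largest_lyapunov(x0=0.123, n=100_000, transient=100):
    x = x0
    for _ in range(transient):       # discard transient behavior
        x = logistic(x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(4.0 * (1.0 - 2.0 * x)))  # log |f'(x)|
        x = logistic(x)
    return total / n

lam = largest_lyapunov()             # should be close to ln 2
```

For observed data with no known equations, as in these water-level studies, the exponent must instead be estimated from the divergence of nearby delay-space trajectories, but the interpretation (inverse of the prediction horizon) is the same.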

}, keywords = {dimension increase, lyapunov exponents, nonlinear-analysis, signals, systems, time-series, waves}, isbn = {0148-0227}, doi = {10.1029/1998jc900104}, url = {We investigate the ability of oscillating neural circuits to switch between different states of oscillation in two basic neural circuits. We model two quite distinct small neural circuits. The first circuit is based on invertebrate central pattern generator (CPG) studies [A. I. Selverston and M. Moulins, The Crustacean Stomatogastric System (Springer-Verlag, Berlin, 1987)] and is composed of two neurons coupled via both gap junction and inhibitory synapses. The second consists of coupled pairs of interconnected thalamocortical relay and thalamic reticular neurons with both inhibitory and excitatory synaptic coupling. The latter is an elementary unit of the thalamic networks passing sensory information to the cerebral cortex [M. Steriade, D. A. McCormick, and T. J. Sejnowski, Science 262, 679 (1993)]. Both circuits have contradictory coupling between symmetric parts. The thalamocortical model has excitatory and inhibitory connections and the CPG has reciprocal inhibitory and electrical coupling. We describe the dynamics of the individual neurons in these circuits by conductance based ordinary differential equations of Hodgkin-Huxley type [J. Physiol. (London) 117, 500 (1952)]. Both model circuits exhibit bistability and hysteresis in a wide region of coupling strengths. The two main modes of behavior are in-phase and out-of-phase oscillations of the symmetric parts of the network. We investigate the response of these circuits, while they are operating in bistable regimes, to externally imposed excitatory spike trains with varying interspike timing and small amplitude pulses. These are meant to represent spike trains received by the basic circuits from sensory neurons. 
Circuits operating in a bistable region are sensitive to the frequency of these excitatory inputs. Frequency variations lead to changes from in-phase to out-of-phase coordination or vice versa. The signaling information contained in a spike train driving the network can place the circuit into one or another state depending on the interspike interval, and this happens within a few spikes. These states are maintained by the basic circuit after the input signal is ended. When a new signal of the correct frequency enters the circuit, it can be switched to another state with the same ease. [S1063-651X(98)13011-8].

}, keywords = {brain, model, oscillations, relay neurons, synchronization, thalamic reticular nucleus, waves}, isbn = {1063-651X}, doi = {10.1103/PhysRevE.58.6418}, url = {The ideas of dynamical chaos have altered our understanding of the origin of random-appearing behavior in many fields of physics and engineering. In the 1980s and 1990s these new viewpoints about apparent random oscillations arising in deterministic systems were investigated in neurophysiology and have led to quite successful reports of chaos in experimental and theoretical investigations. This paper is a "view" paper addressing the role of chaos in living systems, not just reviewing the evidence for its existence, and in particular we ask about the utility of chaotic behavior in nervous systems. From our point of view chaotic oscillations of individual neurons may not be essential for the observed activity of neuronal assemblies but may, instead, be responsible for the multitude of regular regimes of operation that can be accomplished by elements which are chaotic. The organization of chaotic elements in assemblies where their synchronization can result in organized adaptive and reliable activities may lead to general principles used by nature in accomplishing critical functional goals. (C) 1998 IBRO. Published by Elsevier Science Ltd.

}, keywords = {chaos, coupled neurons, model, neurons, oscillations, self-regularization, variability, visual-cortex}, isbn = {0306-4522}, doi = {10.1016/s0306-4522(98)00091-8}, url = {Measurements of a physical or biological system result in a time series, $s(t) = s(t_0 + n\tau_s) = s(n)$, sampled at intervals of $\tau_s$ and initiated at $t_0$. When a signal can be represented as a superposition of sine waves with different amplitudes, its characteristics can be adequately described by Fourier coefficients of amplitude and phase. In these circumstances, linear and Fourier-based methods for extracting information from the signal are appropriate and powerful. However, the signal may be generated by a nonlinear system. The waveform can be irregular and continuous and broadband in the frequency domain. The signal is noise-like, but is deterministic and may be chaotic. More information than the Fourier coefficients is required to describe the signal. This article describes methods for distinguishing chaotic signals from noise, and how to utilize the properties of a chaotic signal for classification, prediction, and control.
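One of the time-domain tools this tutorial surveys, average mutual information between $s(n)$ and $s(n+T)$ for choosing an embedding delay, can be sketched with a simple histogram estimator. The bin count and the synthetic sine series are illustrative assumptions:

```python
import math

# Histogram estimate of the average mutual information (in nats) between
# s(n) and s(n + lag); its first minimum over lag is a common choice of
# embedding delay for a chaotic signal.

def average_mutual_information(series, lag, bins=16):
    lo, hi = min(series), max(series)
    width = (hi - lo) / bins or 1.0
    def bin_of(v):
        return min(int((v - lo) / width), bins - 1)
    pairs = list(zip(series, series[lag:]))
    n = len(pairs)
    px, py, pxy = {}, {}, {}
    for a, b in pairs:                     # accumulate joint/marginal counts
        i, j = bin_of(a), bin_of(b)
        px[i] = px.get(i, 0) + 1
        py[j] = py.get(j, 0) + 1
        pxy[(i, j)] = pxy.get((i, j), 0) + 1
    ami = 0.0
    for (i, j), c in pxy.items():          # sum p log[p / (p_x p_y)]
        p = c / n
        ami += p * math.log(p * n * n / (px[i] * py[j]))
    return ami

series = [math.sin(0.1 * k) for k in range(2000)]
ami_small = average_mutual_information(series, 1)     # nearly redundant lag
ami_quarter = average_mutual_information(series, 16)  # ~quarter period
```

As expected, neighboring samples share more information than samples a quarter period apart, which is why small lags make poor embedding delays.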

}, keywords = {dimension, information, lyapunov exponents, strange attractors, systems, time-series, turbulence}, isbn = {1053-5888}, doi = {10.1109/79.671131}, url = {We report on the nonlinear analysis of electroencephalogram (EEG) recordings in the rabbit visual cortex. Epileptic seizures were induced by local penicillin application and triggered by visual stimulation. The analysis procedures for nonlinear signals have been developed over the past few years and applied primarily to physical systems. This is an early application to biological systems and the first to EEG data. We find that during epileptic activity, both global and local embedding dimensions are reduced with respect to nonepileptic activity. Interestingly, these values are very low ($d_E \approx 3$) and do not change between preictal and tonic stages of epileptic activity; the Lyapunov dimension also remains constant. However, between these two stages the manifestations of the local dynamics change quite drastically, as can be seen, e.g., from the shape of the attractors. Furthermore, the largest Lyapunov exponent is reduced by a factor of about two in the second stage and characterizes the difference in dynamics. Thus, the occurrence of clinical symptoms associated with the tonic seizure activity seems to be mainly related to the local dynamics of the nonlinear system. These results thus seem to give a strong indication that the dynamics remains much the same in these stages of behavior, and changes are due to alterations in model parameters and consequent bifurcations of the observed orbits.

}, keywords = {brain, dynamics, electroencephalogram, low-dimensional chaos, signals}, isbn = {0340-1200}, doi = {10.1007/s004220050410}, url = {We report experimental studies of synchronization phenomena in a pair of biological neurons that interact through naturally occurring, electrical coupling. When these neurons generate irregular bursts of spikes, the natural coupling synchronizes slow oscillations of membrane potential, but not the fast spikes. By adding artificial electrical coupling we studied transitions between synchrony and asynchrony in both slow oscillations and fast spikes. We discuss the dynamics of bursting and synchronization in living neurons with distributed functional morphology.

}, keywords = {clamp, lobster stomatogastric ganglion, models, synchronization}, isbn = {0031-9007}, doi = {10.1103/PhysRevLett.81.5692}, url = {We study chaotic ring laser systems as possible elements in a communications system. To be useful it must be possible to synchronize the transmitter and receiver lasers. We show that chaotic ring lasers can be synchronized using direct light injection from one laser into the optical cavity of the second. This synchronization occurs even when both lasers are quite high dimensional and each possesses many positive Lyapunov exponents. When the lasers are synchronized, the transmitted light can be modulated with information bearing signals and the message accurately recovered at the receiver.

}, keywords = {attractors, cavity, generalized synchronization, generation, instability, Lasers, system, transmitted light, unmasking}, isbn = {0031-9007}, doi = {10.1103/PhysRevLett.80.3153}, url = {The lobster stomatogastric ganglion contains 30 neurons and when modulated can produce two distinct rhythmic motor patterns: the gastric mill and the pyloric. The complete neural circuitry underlying both patterns is well known. Without modulatory input no patterns are produced, and the neurons fire tonically or are silent. When neuromodulators are released into the ganglion from specific neurons or are delivered as hormones, the properties of the neurons and synapses change dramatically and modulator-specific gastric mill and pyloric patterns are produced. In general the rhythmicity derives from the induced burstiness of the neurons, and the pattern from the strengths of the electrical and chemical synapses. The organized activity can be traced to a marked reduction of chaotic activity in individual neurons when they shift from the unmodulated to the modulated state.

}, keywords = {identified neurons, mechanisms, modulation, organization, pyloric network, selective inactivation, spiny lobster, synaptic-interactions, system, underlying pattern generation}, isbn = {0077-89231-57331-168-5}, doi = {10.1111/j.1749-6632.1998.tb09037.x}, url = {Using optimal control techniques we derive and demonstrate the use of an explicit single-step control method for directing a nonlinear system to a target orbit and keeping it there. We require that control values remain near the uncontrolled settings. The full nonlinearity of the problem in state space variables is retained. The {\textquoteright}{\textquoteright}one-step{\textquoteright}{\textquoteright} of the control is typically a composition of known or learned maps over (a) the time required to learn the state, (b) the time to compute the control and (c) the time to apply the control. No special targeting is required, yet the time to control is quite rapid. Working with the dynamics of a well-studied nonlinear electrical circuit, we show how this method works efficiently and accurately in two situations: when the known circuit equations are used, and when control is performed only on a Poincare section of the reconstructed phase space. In each case, because the control rule is known analytically, the control strategy is computationally efficient while retaining high accuracy. The target locations on the selected target trajectory at each control stage are determined dynamically by the initial conditions and the system dynamics, and the target trajectory is an approximation to an unstable periodic orbit of the uncontrolled system. A linear stability analysis shows that dissipation in the dynamical system is essential for reaching a controllable state. (C) 1997 Elsevier Science B.V.

}, keywords = {chaotic dynamics, controlling chaos, dynamical, embedding, one-step control, reconstruction, unstable periodic orbits}, isbn = {0167-6911}, doi = {10.1016/s0167-6911(97)00048-0}, url = {Synchronization of two chaotic systems is not guaranteed by having only negative conditional or transverse Lyapunov exponents. If there are transversally unstable periodic orbits or fixed points embedded in the chaotic set of synchronized motions, the presence of even very small disturbances from noise or inaccuracies from parameter mismatch can cause synchronization to break down and lead to substantial amplitude excursions from the synchronized state. Using an example of coupled one dimensional chaotic maps we discuss the conditions required for robust synchronization and study a mechanism that is responsible for the failure of negative conditional Lyapunov exponents to determine the conditions for robust synchronization.
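The role of the conditional Lyapunov exponent can be illustrated with a toy drive-response pair of logistic maps. The diffusive coupling form below is an assumption, not the paper's specific system, and (as the abstract cautions) average contraction alone does not guarantee robust synchronization:

```python
# Drive-response pair of fully chaotic logistic maps. The response is
# pulled toward the drive with strength `coupling`; the synchronization
# error contracts on average by (1 - coupling)|f'| per step, giving a
# conditional exponent of roughly ln(1 - coupling) + ln 2.

def logistic(x):
    return 4.0 * x * (1.0 - x)

def mean_sync_error(coupling, n=2000, settle=1000, x=0.3, y=0.7):
    total = 0.0
    for k in range(n):
        fx, fy = logistic(x), logistic(y)
        x = fx
        y = (1.0 - coupling) * fy + coupling * fx   # nudge toward the drive
        if k >= settle:                             # average after settling
            total += abs(x - y)
    return total / (n - settle)

weak = mean_sync_error(0.1)    # conditional exponent positive: no sync
strong = mean_sync_error(0.9)  # conditional exponent negative: sync
```

The abstract's point is the step beyond this sketch: even when the averaged error vanishes, embedded transversally unstable orbits can make the synchronized state fragile under noise or parameter mismatch.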

}, keywords = {attractors, bifurcation, chaos, COMMUNICATION, dynamical-systems, intermittency, oscillators, periodic-orbits, stability, synchronization}, isbn = {1057-7122}, doi = {10.1109/81.633875}, url = {The results of neurobiological studies in both vertebrates and invertebrates lead to the general question: How is a population of neurons, whose individual activity is chaotic and uncorrelated, able to form functional circuits with regular and stable behavior? What are the circumstances which support these regular oscillations? What are the mechanisms that promote this transition? We address these questions using our experimental and modeling studies describing the behavior of groups of spiking-bursting neurons. We show that the role of inhibitory synaptic coupling between neurons is crucial in the self-control of chaos.

}, keywords = {chaos, model, neural assemblies, self-control, synaptic coupling}, isbn = {1057-7122}, doi = {10.1109/81.633889}, url = {Using a low frequency nonlinear electrical circuit, we experimentally demonstrate an efficient nonlinear control method based on our theoretical developments. The method works in a state space for the circuit, which is reconstructed from observations of a single voltage. Assuming small control variations from the uncontrolled state, the method is fully nonlinear and {\textquoteright}{\textquoteright}one step{\textquoteright}{\textquoteright} optimal. It requires no knowledge of local state space linearizations of the dynamics near the target state. Starting from various initial states within the basin of attraction of the circuit attractor, we control to a period one and to a period two target orbit. Each target orbit is an unstable periodic orbit of the uncontrolled system.

}, keywords = {chaos, control, nonlinear, optimal}, isbn = {1057-7122}, doi = {10.1109/81.633894}, url = {We investigate the variation of the out-of-phase periodic rhythm produced by two chaotic neurons (Hindmarsh-Rose neurons [J. L. Hindmarsh and R. M. Rose, Proc. R. Soc. London B 221, 87 (1984)]) coupled by electrical and reciprocally synaptic connections. The exploration of a two-parametric bifurcation diagram, as a function of the strength of the electrical and inhibitory coupling, reveals that the periodic rhythms associated with the limit cycles bounded by saddle-node bifurcations undergo a strong variation as a function of small changes of electrical coupling. We found that there is a scaling law for the bifurcations of the limit cycles as a function of the strength of both couplings. From the functional point of view of this mixed type of coupling, the small variation of electrical coupling provides a high sensitivity for period regulation inside the regime of out-of-phase synchronization.

}, keywords = {generation, model, modulation}, isbn = {1063-651X}, url = {The Nd:YAG laser with an intracavity second harmonic generating crystal is a versatile test bed for concepts of nonlinear time series analysis as well as for techniques that have been developed for control of chaotic systems. Quantitative comparisons of experimentally measured time series of the infrared light intensity are made with numerically computed time series from a model derived here from basic principles. These comparisons utilize measures that help to distinguish between low and high dimensional dynamics and thus enhance our understanding of the influence of noise sources on the emitted laser light.

}, keywords = {information, multimode laser, strange attractors, system}, isbn = {1063-651X}, doi = {10.1103/PhysRevE.55.6483}, url = {Ocean gravity waves measured at the Harvest Platform off the coast of California were analyzed using tools derived from methods of nonlinear dynamics. We compare these with laboratory data of random surface wave fields. The Harvest data show clear signs of low-dimensional dynamics in action and have an embedding dimension of six or seven. The largest Lyapunov exponent, a measure of future uncertainty, shows that the data are from a chaotic system and that predictions are limited to a horizon of about two major periods of the predominant wavelength. These results underscore the utility of nonlinear time domain methods for signal analysis and for extracting dynamical information not visible with linear methods.

}, keywords = {information, observed chaotic data, strange attractors}, doi = {10.1029/96jc02993}, url = {Experimental observations of the intracellularly recorded electrical activity in individual neurons show that the temporal behavior is often chaotic. We discuss both our own observations on a cell from the stomatogastric central pattern generator of the lobster and earlier observations in other cells. In this paper we work with models of chaotic neurons, building on models by Hindmarsh and Rose for bursting, spiking activity in neurons. The key feature of these simplified models of neurons is the presence of coupled slow and fast subsystems. We analyze the model neurons using the same tools employed in the analysis of our experimental data. We couple two model neurons both electrotonically and electrochemically in inhibitory and excitatory fashions. In each of these cases, we demonstrate that the model neurons can synchronize in phase and out of phase depending on the strength of the coupling. For normal synaptic coupling, we have a time delay between the action of one neuron and the response of the other. We also analyze how the synchronization depends on this delay. A rich spectrum of synchronized behaviors is possible for electrically coupled neurons and for inhibitory coupling between neurons. In synchronous neurons one typically sees chaotic motion of the coupled neurons. Excitatory coupling produces essentially periodic voltage trajectories, which are also synchronized. We display and discuss these synchronized behaviors using two {\textquoteright}{\textquoteright}distance{\textquoteright}{\textquoteright} measures of the synchronization.

}, keywords = {systems}, isbn = {0899-7667}, doi = {10.1162/neco.1996.8.8.1567}, url = {Synchronization of chaotic oscillators in a generalized sense leads to richer behavior than identical chaotic oscillations in coupled systems. It may imply a more complicated connection between the synchronized trajectories in the state spaces of coupled systems. We suggest a method here that can be used to detect and study generalized synchronization in drive-response systems. This technique, the auxiliary system method, utilizes a second, identical response system to monitor the synchronized motions. The method can be implemented both numerically and experimentally and in some cases it leads to analytical results for generalized synchronization.
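The auxiliary system method can be sketched with toy maps: a chaotic drive feeds two identical copies of a contracting response started from different initial conditions, and convergence of the copies to each other signals generalized synchronization with the drive. The maps and constants below are illustrative assumptions, not the paper's systems:

```python
# Auxiliary system method: a chaotic logistic drive feeds two identical,
# contracting response systems started from different states. If the two
# responses converge to each other, the response is in generalized
# synchrony with the drive even though it never matches the drive itself.

def drive_map(x):
    return 4.0 * x * (1.0 - x)       # chaotic drive signal

def response_map(y, x, c=0.8):
    return (1.0 - c) * y + c * x     # contracting response driven by x

def auxiliary_separation(n=200, x=0.3, y=0.9, y_aux=0.1):
    for _ in range(n):
        x = drive_map(x)
        y = response_map(y, x)
        y_aux = response_map(y_aux, x)   # identical auxiliary copy
    return abs(y - y_aux)

separation = auxiliary_separation()      # shrinks by (1 - c) each step
```

The appeal noted in the abstract survives in the sketch: the test needs no knowledge of the functional relation between drive and response, only a second copy of the response.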

}, isbn = {1063-651X}, doi = {10.1103/PhysRevE.53.4528}, url = {Using a data analysis toolkit designed to explore data from nonlinear systems using time domain methods, it has been determined that the observed number of degrees of freedom in one pristine measurement of ocean ambient background pressure fluctuations or {\textquoteright}{\textquoteright}noise{\textquoteright}{\textquoteright} is nine. While the true number of degrees of freedom in the ocean is probably large, the observed measurements have nine degrees of freedom. There is a positive Lyapunov exponent, which identifies the system as being chaotic. The largest exponent is very large, indicating that the prediction horizon for these data is confined to a few samples. Determination of the degrees of freedom is important for the construction of physical models and nonlinear noise reduction filters, which may be based on characteristics of the observed few degrees of freedom from the background acoustic source. The magnitude of the largest Lyapunov exponent provides a measure of confidence for signal state prediction and mitigation. (C) 1996 Acoustical Society of America.

}, keywords = {exponents, strange attractors, time-series}, isbn = {0001-4966}, doi = {10.1121/1.414730}, url = {A case study of the application of recent methods of nonlinear time series analysis is presented. The 1848-1992 biweekly time series of the Great Salt Lake (GSL) volume is analyzed for evidence of low dimensional dynamics and predictability. The spectrum of Lyapunov exponents indicates that the average predictability of the GSL is a few hundred days. Use of the false nearest neighbor statistic shows that the dynamics of the GSL can be described in time delay coordinates by four dimensional vectors with components lagged by about half a year. Local linear maps are used in this embedding of the data and their skill in forecasting is tested in split sample mode for a variety of GSL conditions: lake average volume, near the beginning of a drought, near the end of a drought, prior to a period of rapid lake rise. Implications for modeling low frequency components of the hydro-climate system are discussed.
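The false-nearest-neighbor statistic used in the Great Salt Lake analysis can be sketched as follows. A sine wave, which needs a two-dimensional embedding, serves as an assumed test signal; the threshold and the temporal exclusion window are illustrative choices:

```python
import math

# False nearest neighbors: a point's nearest neighbor in a d-dimensional
# delay embedding is "false" if adding the (d+1)-th coordinate pushes the
# pair far apart. The false fraction drops once d unfolds the attractor.

def false_neighbor_fraction(series, dim, tau=1, threshold=5.0, exclude=10):
    start = dim * tau                  # leave room for the extra coordinate
    idx = range(start, len(series))

    def vec(i, d):
        return tuple(series[i - k * tau] for k in range(d))

    false_count = 0
    for i in idx:
        best_j, best_d = None, float("inf")
        for j in idx:
            if abs(i - j) <= exclude:  # skip temporally close points
                continue
            d = math.dist(vec(i, dim), vec(j, dim))
            if d < best_d:
                best_j, best_d = j, d
        extra = abs(series[i - dim * tau] - series[best_j - dim * tau])
        if extra > threshold * best_d:
            false_count += 1
    return false_count / len(idx)

series = [math.sin(0.1 * k) for k in range(300)]
f1 = false_neighbor_fraction(series, 1)   # 1D embedding: many false pairs
f2 = false_neighbor_fraction(series, 2)   # 2D embedding: few false pairs
```

In one dimension, rising and falling phases of the sine collide and produce false neighbors; two delay coordinates separate them, which is the same logic that fixes the GSL embedding dimension at four in the abstract above.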

}, keywords = {dimension, strange attractors}, isbn = {0930-7575}, doi = {10.1007/bf00219502}, url = {Using methods from nonlinear dynamics, we examine a long climatological record of measurements of the volume of the Great Salt Lake in Utah. These observations, recorded every 15 days since 1847, provide direct insight into the effect of large-scale atmospheric motions in climatological studies. The lake drains nearly 100,000 km$^2$, and it thus acts as a spatial filter for the finest degrees of freedom for climate. In filtering out a very large number of atmospheric and climatological motions, it reduces its complexity but retains its effectiveness as a climate sensing system. We demonstrate that there are four degrees of freedom active in the Great Salt Lake volume record, that these data reside on a strange attractor of dimension slightly larger than three, and that these data are predictable with a horizon of order a few years. We then show that predictive models based on local properties on the attractor perform remarkably well in reproducing the observations when trained on earlier observations. The ability to predict using earlier observations on the attractor suggests very strongly that over the period of the record, the system has been stationary and that it is a record of the natural variation of the climate. If there is anthropogenic influence leading to changes in climate, this record suggests it has not made its effect measurable in such large-scale integrating observations. (C) 1996 Elsevier Science Ltd.

}, keywords = {atlantic, temperature-fluctuations, time-series}, isbn = {0360-5442}, doi = {10.1016/0360-5442(96)00018-7}, url = {Data from an underwater continuous wave signal are analyzed using methods derived from the study of chaotic systems. Distortions by the environment effectively add three degrees of freedom (dimensions) to the signal. The dimension required to reconstruct and analyze the received data is five in contrast to a pure tonal or sine wave that only requires two dimensions. The data are examined with a chaotic data analysis toolkit that includes determination of degrees of freedom using local and global false-nearest-neighbor statistics, average mutual information, correlation dimension, and local Lyapunov exponents. These results are important for the development of methods for correction of propagation distortions and nonlinear noise reduction algorithms. Application of this knowledge should lead to improvements in the detection and classification of underwater signals. (C) 1996 Acoustical Society of America.

}, keywords = {exponents, strange attractors, time-series}, isbn = {0001-4966}, doi = {10.1121/1.414497}, url = {We study the possibility that variations in the volume of the Great Salt Lake (GSL), a large, closed basin lake, may be described as a low-dimensional nonlinear dynamical system. There is growing evidence for structure in the recurrence patterns of climatic fluctuations that drive western United States hydrology. Moreover, the time behavior of such lakes is generally more regular than that of the climatic forcing. This suggests the possibility that an analysis of the 144-year, biweekly time series of the GSL volume may shed some light on the underlying dynamics of lake variations. Three methods (correlation dimension, nearest neighbor dimension, and false neighbor dimension) of estimating attractor dimension are applied and compared. The analysis suggests that the GSL dynamics may be described by a dimension of about four. Implications of such analyses relative to low-frequency variations and colored noise and limitations of such analyses are discussed.

}, keywords = {climate attractors, deterministic chaos, near-neighbor information, rainfall, short timescales, small data sets, strange attractors, systems, time-series, weather attractor}, isbn = {0043-1397}, doi = {10.1029/95wr02872}, url = {We report on the analysis of experiments on a neodymium-doped yttrium aluminum garnet laser with an intracavity frequency-doubling crystal. Three modes of the laser were excited in differing polarization configurations. The total intensity of infrared light was observed and then analyzed using methods of nonlinear-time-series analysis. We present clear evidence using global false nearest neighbors that when all polarizations are parallel, the intensity is chaotic with two positive Lyapunov exponents and the system can be embedded in dimension 7. The noise level in this operating condition, which we call type I chaos, is small. When one of the polarizations is perpendicular to the others, the intensity is again chaotic with positive Lyapunov exponents, but there is substantial noise in the signal of high dimensional origin, and no finite embedding dimension appears possible. We call this type II chaos. We suggest that the origin of this phenomenon is the intrinsic quantum noise associated with the generation of green light, which is 25 times more intense in the type II operating configuration than in the first. In past experiments with this system we have found that the type I chaos can be controlled to unstable periodic orbits while type II cannot. In each type of chaotic laser operation we use local false nearest neighbors to demonstrate that the local dimension of the dynamics is 7. This means seven differential equations can capture the full dynamics of these regimes of the laser. We evaluate local and global false nearest neighbors to support our conclusions and determine the Lyapunov spectrum of each type of chaotic behavior. 
The predictability of type II chaos is shown to be much less than that of type I, and we make local polynomial models in reconstructed-state space to demonstrate that we can predict rather well for type I chaos. Finally we suggest a fairly standard model for the interaction of the infrared light with the nonlinear frequency-doubling medium and with a two-level model of the active medium.

}, keywords = {information, strange attractors, system}, isbn = {1050-2947}, doi = {10.1103/PhysRevA.53.440}, url = {Variations in the volume of closed basin lakes, such as the Great Salt Lake, are often driven by large-scale, persistent climatic fluctuations. There is growing evidence of structure in the recurrence patterns of such fluctuations, their relation to physical mechanisms, and their manifestation in hydrologic time series. Classical, linear methods for time series analysis and forecasting may be inappropriate for modeling such processes. Here we consider the time series of interest as the outcome of a finite-dimensional, nonlinear dynamical system and use nonparametric regression to recover the nonlinear, autoregressive {\textquoteleft}{\textquoteleft}skeleton{\textquoteright}{\textquoteright} of the underlying dynamics. The resulting model can be used for short-term forecasting, as well as for exploring other properties of the system. The utility of the approach is demonstrated with synthetic periodic data and data from low dimensional, chaotic, dynamical systems. An application to the 1847-1992 Great Salt Lake biweekly volume time series is also reported.

}, keywords = {chaotic time-series, nino southern oscillation, patterns, prediction}, isbn = {0043-1397}, doi = {10.1029/95wr03402}, url = {Construction of a dynamical theory of neural assemblies has been a goal of physicists, mathematicians and biologists for many years now. Experimental achievements in modern neurobiology have allowed researchers to approach this goal. Significant advances have been made for small neural networks, which are generators of the rhythmic activities of living organisms. The subject of the present review is the problem of synchronisation, one of the major aspects of the dynamical theory. It is shown that synchronisation plays a key role in the activity of both minimal neural networks (neural pair) and neural ensembles with a large number of elements (cortex).

}, keywords = {cat visual-cortex, central pattern generator, cortical spreading depression, coupled nonlinear, cross-correlation, dependent neuronal oscillations, intersegmental coordination, lamprey spinal-cord, mathematical-model, oscillators, stomatogastric ganglion}, isbn = {0042-1294}, url = {We theoretically examine the consequences of modulating an external-cavity semiconductor laser around its mode-locking resonant frequency. When the modulation frequency is below resonance, the laser exhibits a three-frequency route to chaos. When the modulation frequency is above resonance, the laser oscillates in two- and three-frequency states. The chaotic instability is a result of the nonlinear interaction of three periodic modes of the laser system. These modes are dynamical manifestations of the composite cavity mode-locking resonance, the applied field that is due to the modulation, and the laser relaxation oscillation.

}, keywords = {dynamics, injection, locking, optical feedback, strange attractors}, isbn = {0740-3224}, doi = {10.1364/josab.12.001150}, url = {Building a source of low-dimensional chaotic signals with specified properties poses new challenges in the study of nonlinear systems. We address this problem using two synchronized asymmetrically coupled chaotic systems. As an uncoupled oscillator, each produces chaotic signals with distinctively different properties. The chaotic signal produced by the synchronized oscillators possesses properties intermediate with respect to the two original signals. The properties of this synthesized signal can be controlled by the ratio of the coupling parameters.

}, keywords = {systems}, isbn = {1063-651X}, doi = {10.1103/PhysRevE.52.214}, url = {Synchronization of chaotic systems is frequently taken to mean actual equality of the variables of the coupled systems as they evolve in time. We explore a generalization of this condition, which equates dynamical variables from one subsystem with a function of the variables of another subsystem. This means that synchronization implies a collapse of the overall evolution onto a subspace of the system attractor in full space. We explore this idea in systems where a response system y(t) is driven with the output of a driving system x(t), but there is no feedback to the driver. We lose generality but gain tractability with this restriction. To investigate the existence of the synchronization condition y(t)=φ(x(t)) we introduce the idea of mutual false nearest neighbors to determine when closeness in response space implies closeness in driving space. The synchronization condition also implies that the response dynamics is determined by the drive alone, and we provide tests for this as well. Examples are drawn from computer simulations on various known cases of synchronization and on data from nonlinear electrical circuits. Determining the presence of generalized synchronization will be quite important when one has only scalar observations from the drive and from the response systems since the use of time delay (or other) embedding methods will produce {\textquoteleft}{\textquoteleft}imperfect{\textquoteright}{\textquoteright} coordinates in which strict equality of the synchronized variables is unlikely to transpire.

}, isbn = {1063-651X}, doi = {10.1103/PhysRevE.51.980}, url = {Data from experiments on the turbulent boundary layer around an axisymmetric vehicle rising under its own buoyancy are described in detail and analyzed using tools developed in nonlinear dynamics. Arguments are given that in this experiment the size of the wall mounted pressure sensors would make the data sensitive to the dynamics of about ten or so coherent structures in the turbulent boundary layer. Analysis of a substantial number of large, well sampled data sets indicates that the (integer) dimension of the embedding space required to capture the dynamics of the observed flows in the laminar regime is very large. This is consistent with there being no pressure fluctuations expected here and the signal being dominated by instrumental {\textquoteleft}{\textquoteleft}noise.{\textquoteright}{\textquoteright} In a consistency check we find that data from the ambient state of the vehicle before buoyant rise occurs and data from an accelerometer mounted in the prow are also consistent with this large dimension. The time scales in those data are also unrelated to fluid dynamic phenomena. In the transition and turbulent regions of the flow we find the pressure fluctuation time scales to be consistent with those of the fluid flow (about 250 $\mu$sec) and determine the dimension required for embedding the data to be about 7-8 for the transitional region and about 8-9 for the turbulent regime. These results are examined in detail using both global and local false nearest-neighbor methods as well as mutual information aspects of the data. The results indicate that the pressure fluctuations are determined in these regimes by the coherent structures in the turbulent boundary layer. Applications and further investigations suggested by these results are discussed.

}, keywords = {chaos, dimension, information, strange attractors, turbulent boundary-layer}, isbn = {1063-651X}, doi = {10.1103/PhysRevE.49.4003}, url = {In time-delay reconstruction of chaotic attractors we can accurately predict the short-term future behavior of the observed variable x(t) = x(n) = x(t0 + tau(s)n) without prior knowledge of the equations of motion by building local or global models in the state space. In many cases we also want to predict variables other than the one which is observed and require methods for determining models to predict these variables in the same space. We present a method which takes measurements of two variables x(n) and z(n) and builds models for the determination of z(n) in the phase-space made out of the x(n) and its time lags. Similarly we show that one may produce models for x(n) in the z(n) space, except where special symmetries prevent this, such as in the familiar Lorenz model. Our algorithm involves building local polynomial models in the reconstructed phase space of the observed variable of low order (linear or quadratic) which approximate the function z(n) = F(x(n)) where x(n) is a vector constructed from a sequence of values of observed variables in a time delay fashion. We train the models on a partial data set of measured values of both x(n) and z(n) and then predict the z(n) in a recovery set of observations of x(n) alone. In all of our analyses we assume that the observed data alone are available to us and that we possess no knowledge of the dynamical equations. We test this method on the numerically generated data set from the Lorenz model and also on a number of experimental data sets from electronic circuits.

}, keywords = {chaos, dimension, information, series, space reconstruction, strange attractors}, isbn = {1063-651X}, doi = {10.1103/PhysRevE.49.1840}, url = {Using a form of linear feedback we call dissipative feedback control, we show how to use external forcing to control a chaotic dynamical system to a fixed point or an unstable periodic orbit when the location of the fixed point or unstable periodic orbit may change slowly with time. The ability to follow a desired state of the system by an external control even when that state is slowly varying in time we call tracking. This slow {\textquoteleft}{\textquoteleft}drift{\textquoteright}{\textquoteright} of states is the usual situation in actual experimental realizations of chaotic systems in nonlinear circuits and other physical manifestations, and this drift can be accounted for by providing a slow dynamics for the location of the fixed point or periodic orbit. We discuss the theoretical aspects of this idea and show its feasibility in some experiments with nonlinear circuits with chaotic behavior.

}, keywords = {system}, isbn = {1063-651X}, doi = {10.1103/PhysRevE.50.314}, url = {Coherent structures in fluid boundary layers at high Reynolds numbers are a prominent feature of these flows. The structures appear as concentrations of vorticity into {\textquoteleft}{\textquoteleft}hairpin{\textquoteright}{\textquoteright} and other shapes. We explore the inviscid interaction and stability of vortex filaments initially situated spanwise to the mean flow in a model of a boundary layer. Both for a single vortex filament and its image through the boundary and for an infinite line of such filaments with their images we find a linear instability associated with deformations of the filament along its length with maximum instability having a wavelength on the order of the height of the filament above the boundary. The linear unstable manifold for this instability points at approximately 45 degrees from the plane of the boundary in accord with experimental observations and numerical modeling of these coherent structures. This provides a dynamical origin to the observations of the orientation of these coherent structures.

}, keywords = {flow, motion, vortices}, isbn = {1063-651X}, doi = {10.1103/PhysRevE.50.1206}, url = {Using tools for the analysis of nonlinear chaotic observations, we have determined the embedding dimension required for data from measurements of wall pressure fluctuations in high Reynolds number boundary layer flow around an axisymmetric body. The Reynolds number based on the boundary layer size is as large as $10^5$, and the number of degrees of freedom excited in the fluid is certainly very large. Yet the active degrees of freedom seen by the pressure sensors in the transitional and turbulent parts of the flow are as low as 8 to 10. We discuss the source of these results in terms of sensor size and the known coherent structures in these shear flows.

}, keywords = {dimension, flow}, isbn = {0031-9007}, doi = {10.1103/PhysRevLett.72.2383}, url = {Chaotic time series data are observed routinely in experiments on physical systems and in observations in the field. The authors review developments in the extraction of information of physical importance from such measurements. They discuss methods for (1) separating the signal of physical interest from contamination ({\textquoteleft}{\textquoteleft}noise reduction{\textquoteright}{\textquoteright}), (2) constructing an appropriate state space or phase space for the data in which the full structure of the strange attractor associated with the chaotic observations is unfolded, (3) evaluating invariant properties of the dynamics such as dimensions, Lyapunov exponents, and topological characteristics, and (4) model making, local and global, for prediction and other goals. They briefly touch on the effects of linearly filtering data before analyzing it as a chaotic time series. Controlling chaotic physical systems and using them to synchronize and possibly communicate between source and receiver is considered. Finally, chaos in space-time systems, that is, the dynamics of fields, is briefly considered. While much is now known about the analysis of observed temporal chaos, spatio-temporal chaotic systems pose new challenges. The emphasis throughout the review is on the tools one now has for the realistic study of measured data in laboratory and field settings. It is the goal of this review to bring these tools into general use among physicists who study classical and semiclassical systems. Much of the progress in studying chaotic systems has rested on computational tools with some underlying rigorous mathematics. Heuristic and intuitive analysis tools guided by this mathematics and realizable on existing computers constitute the core of this review.

}, keywords = {analysis, chaos, correlation dimension, deterministic, dynamical-systems, local predictability, lyapunov exponents, noise-reduction method, ordinary differential-equations, periodic-orbits, strange attractors, time-series}, isbn = {0034-6861}, doi = {10.1103/RevModPhys.65.1331}, url = {Every chaotic attractor is thought to be composed of the collection of unstable manifolds of all its unstable periodic orbits (UPO{\textquoteright}s). These UPO{\textquoteright}s can be used for communication by amplitude or phase modulation just as one would modulate stable periodic orbits (resonant frequencies of a linear system or limit cycles in a nonlinear system). Using UPO{\textquoteright}s for communications has a variety of benefits. Since a chaotic attractor contains an infinite number of UPO{\textquoteright}s, a single time series can carry many different messages by placing each one on a different UPO. Because UPO{\textquoteright}s are topological properties of a chaotic attractor, the signal has a natural noise immunity, and receipt of a message should be robust to moderate differences between the transmitter and receiver parameters.

}, isbn = {1057-7130}, doi = {10.1109/82.246165}, url = {We examine experimentally the consequence of frequency detuning an actively mode-locked external-cavity semiconductor laser from resonance. We observe a transition of the laser system from a periodic oscillation to a nonperiodic state with broadened spectral tones. By estimating the fractal dimension of the corresponding phase-space attractors, we show the presence of low-dimensional chaos. The route to chaos is a well-defined regime of three-frequency quasi-periodicity preceded by a two-frequency quasi-periodicity.

}, keywords = {dimension, ghz, locked laser, oscillations, pulse-train instabilities, strange attractors, transmission}, isbn = {0740-3224}, doi = {10.1364/josab.10.002065}, url = {Conservation of potential vorticity in Eulerian fluids reflects particle interchange symmetry in the Lagrangian fluid version of the same theory. The algebra associated with this symmetry in the shallow-water equations is studied here, and we give a method for truncating the degrees of freedom of the theory which preserves a maximal number of invariants associated with this algebra. The finite-dimensional symmetry associated with keeping only N modes of the shallow-water flow is SU(N). In the limit where the number of modes goes to infinity ($N \rightarrow \infty$) all the conservation laws connected with potential vorticity conservation are recovered. We also present a Hamiltonian which is invariant under this truncated symmetry and which reduces to the familiar shallow-water Hamiltonian when $N \rightarrow \infty$. All this provides a finite-dimensional framework for numerical work with the shallow-water equations which preserves not only energy and enstrophy but all other known conserved quantities consistent with the finite number of degrees of freedom. The extension of these ideas to other nearly two-dimensional flows is discussed.

}, keywords = {symplectic integration}, isbn = {1063-651X}, doi = {10.1103/PhysRevE.48.3643}, url = {The time delay reconstruction of the state space of a system from observed scalar data requires a time lag and an integer embedding dimension. The minimum necessary global embedding dimension $d_E$ may still be larger than the actual dimension of the underlying dynamics $d_L$. The embedding theorem only guarantees that the attractor of the system is fully unfolded using $d_E > 2 d_A$, with $d_A$ the fractal attractor dimension. Using the idea of local false nearest neighbors, we discuss methods for determining the integer-valued $d_L$.

}, keywords = {information, ring cavity, strange attractors, system, time-series}, isbn = {1063-651X}, doi = {10.1103/PhysRevE.47.3057}, url = {We examine the issue of determining an acceptable minimum embedding dimension by looking at the behavior of near neighbors under changes in the embedding dimension from $d \rightarrow d + 1$. When the number of nearest neighbors arising through projection is zero in dimension $d_E$, the attractor has been unfolded in this dimension. The precise determination of $d_E$ is clouded by "noise," and we examine the manner in which noise changes the determination of $d_E$. Our criterion also indicates the error one makes by choosing an embedding dimension smaller than $d_E$. This knowledge may be useful in the practical analysis of observed time series.

}, keywords = {chaotic time-series, dynamic system consistent, information, noise-reduction, ring cavity, strange attractors}, isbn = {1050-2947}, doi = {10.1103/PhysRevA.45.3403}, url = {The methods for establishing a phase space or state space for a physical system from measurements of a scalar time series are reviewed, and the implications for the analysis of signals of the geometry of strange attractors are discussed. Classifying the physical systems by invariants on the attractor is seen to be the analog of classifying linear systems by the resonant response frequencies. The statistical properties of deterministic systems are used to perform this classifying. Methods for separating a chaotic signal of interest from contamination such as high dimensional noise are discussed. An example of a signal from a nonlinear circuit whose equations of evolution are unknown is discussed in detail and all the items mentioned are illustrated.

}, isbn = {0-7803-0532-9}, doi = {10.1109/icassp.1992.226473}, url = {We develop methods for determining local Lyapunov exponents from observations of a scalar data set. Using average mutual information and the method of false neighbors, we reconstruct a multivariate time series, and then use local polynomial neighborhood-to-neighborhood maps to determine the phase space partial derivatives required to compute Lyapunov exponents. In several examples we demonstrate that the methods allow one to accurately reproduce results determined when the dynamics is known beforehand. We present a new recursive QR decomposition method for finding the eigenvalues of products of matrices when that product is severely ill conditioned, and we give an argument to show that local Lyapunov exponents are ambiguous up to order 1/L in the number of steps due to the choice of coordinate system. Local Lyapunov exponents are the critical element in determining the practical predictability of a chaotic system, so the results here will be of some general use.

}, keywords = {experimental chaotic time series, global, information, local lyapunov exponents, local predictability of chaos, predictability, spectrum, strange attractors, system, time-series}, isbn = {0938-8974}, doi = {10.1007/bf01208929}, url = {We review the idea of Lyapunov exponents for chaotic systems and discuss their evaluation from observed data alone. These exponents govern the growth or decrease of small perturbations to orbits of a dynamical system. They are critical to the predictability of models made from observations as well as known analytic models. The Lyapunov exponents are invariants of the dynamical system and are connected with the dimension of the system attractor and to the idea of information generation by the system dynamics. Lyapunov exponents are among the many ways we can classify observed nonlinear systems, and their appeal to physicists remains their clear interpretation in terms of system stability and predictability. We discuss the familiar global Lyapunov exponents which govern the evolution of perturbations for long times and local Lyapunov exponents which determine the predictability over a finite number of time steps.

}, isbn = {0217-9792}, doi = {10.1142/s021797929100064x}, url = {We examine the question of accurately determining, from an observed time series, the Lyapunov exponents for the dynamical system generating the data. This includes positive, zero, and some or all of the negative exponents. We show that even with very large data sets, it is clearly advantageous to use local neighborhood-to-neighborhood mappings with higher-order Taylor series, rather than just local linear maps as has been done previously. We give examples using up to fifth-order polynomials. We demonstrate this procedure on two familiar maps and two familiar flows: the Henon and Ikeda maps of the plane to itself, the Lorenz system of three ordinary differential equations, and the Mackey-Glass delay differential equation. We stress the importance of maintaining two dimensions for converting the scalar data into time delay vectors: one is a global dimension to ensure proper unfolding of the attractor as a whole, and the other is a local dimension for capturing the local dynamics on the attractor. We show the effects of changing the local and global dimensions, changing the order of the mapping polynomial, and additive (measurement) noise. There will always be some limit to the number of exponents that can be accurately determined from a given finite data set. We discuss a method of determining this limit by numerically obtaining the singularity spectra of the data set and also show how it is often appropriate to make this choice based on the fractal dimension of the attractor. If excessively large dimensions are used, spurious exponents will be generated, and in some cases the accuracy of the true exponents will be affected. We present methods of identifying these spurious exponents by determining the Lyapunov direction vectors at particular points in the data set. We can then use these to identify numerical problems and to associate data-set singularities with particular exponents. 
The behavior of spurious exponents in the presence of noise is also investigated, and found to be different from that of the true exponents. These provide methods for identifying spurious exponents in the analysis of experimental data where the system dynamics may not be known a priori.

}, keywords = {chaos, critical exponent, information, strange attractors}, isbn = {1050-2947}, doi = {10.1103/PhysRevA.43.2787}, url = {A Hamiltonian representation for the interaction between vortical and internal wave motion in an inviscid, stratified fluid in a rotating frame is formulated. A Lagrangian and compressible representation of the flow is used in order to avoid difficulties associated with non-canonical coordinates and constrained systems. A quadratic Hamiltonian for the linearized motion around a state of no flow is derived. Canonical transformations are then employed in order to isolate normal mode coordinates and write the Hamiltonian as a sum of independent harmonic oscillators. The three possible modes of linearized motion are the acoustic, internal wave and potential vorticity carrying motions. The acoustic modes are of little interest in the problem at hand and are retained in order to avoid difficulties associated with the incompressibility constraint. We discuss ways in which these modes may be eliminated or rendered harmless. The coordinate corresponding to the potential vorticity carrying mode is absent from the quadratic Hamiltonian as expressed in normal mode coordinates, so that its conjugate momentum is conserved. This conjugate momentum is precisely the linearized potential vorticity. Future investigations are suggested. First, the normal mode coordinates may be used in a study of the nonlinear problem, the advantage being that the various modes and their energies are identified so that Hamiltonian perturbation theory is available. Second, since the full potential vorticity is only a quadratic function of the normal mode coordinates in Lagrangian variables, it may be possible for it to be used as a conserved momentum to some canonical coordinate. In this way, one could identify unambiguously, the potential vorticity carrying part of the flow in the fully nonlinear problem.

}, keywords = {gravity-waves, hamiltonian dynamics, internal waves, nonlinear stability analysis, vortical motions}, isbn = {0309-1929}, doi = {10.1080/03091929108227775}, url = {We examine the question of accurately determining Lyapunov exponents for a time series. We find that it is advantageous to use local mappings with higher-order Taylor series, rather than linear maps as done earlier. We demonstrate this procedure for the Ikeda map and the Lorenz system. We present methods for identifying spurious exponents by analyzing data-set singularities and by determining the Lyapunov direction vectors. The behavior of spurious exponents in the presence of noise is also investigated, and found to be different from that of the true exponents.

}, isbn = {0031-9007}, doi = {10.1103/PhysRevLett.65.1523}, url = {We consider the problem of prediction and system identification for time series having broadband power spectra that arise from the intrinsic nonlinear dynamics of the system. We view the motion of the system in a reconstructed phase space that captures the attractor (usually strange) on which the system evolves and give a procedure for constructing parametrized maps that evolve points in the phase space into the future. The predictor of future points in the phase space is a combination of operation on past points by the map and its iterates. Thus the map is regarded as a dynamical system and not just a fit to the data. The invariants of the dynamical system, the Lyapunov exponents and optimum moments of the invariant density on the attractor, are used as constraints on the choice of mapping parameters. The parameter values are chosen through a constrained least-squares optimization procedure, constrained by the values of these invariants. We give a detailed discussion of methods to extract the Lyapunov exponents and optimum moments from data and show how to equate them to the values for the parametric map in the constrained optimization. We also discuss the motivation and methods we utilize for choosing the form of our parametric maps. Their form has a strong similarity to the work in statistics on kernel density estimation, but the goals and techniques differ in detail. Our methodology is applied to "data" from the H{\'e}non map and the Lorenz system of differential equations and shown to be feasible. We find that the parameter values that minimize the least-squares criterion do not, in general, reproduce the invariants of the dynamical system. The maps that do reproduce the values of the invariants are not optimum in the least-squares sense, yet still are excellent predictors. 
We discuss several technical and general problems associated with prediction and system identification on strange attractors. In particular, we consider the matter of the evolution of points that are off the attractor (where few or no data are available), onto the attractor where long-term motion takes place. We find that we are able to realize maps that give a least-squares approximation to the data with rms variation over the attractor of 0.5\% or less and still reproduce the dynamical invariants to 5\% or better. The dynamical invariants are the classifiers of the dynamical system producing the broadband time series in the first place, so this quality of the maps is essential in representing the correct dynamics.

}, isbn = {1050-2947}, doi = {10.1103/PhysRevA.41.1782}, url = {We consider the problem of prediction and system identification for time series having broadband power spectra which arise from the intrinsic nonlinear dynamics of the system. We view the motion of the system in a reconstructed phase space which captures the attractor (usually strange) on which the system evolves, and give a procedure for constructing parameterized maps which evolve points in the phase space into the future. The predictor of future points in the phase space is a combination of operation on past points by the map and its iterates. Thus the map is regarded as a dynamical system, not just a fit to the data. The invariants of the dynamical system {\textemdash} the Lyapunov exponents and aspects of the invariant density on the attractor {\textemdash} are used as constraints on the choice of mapping parameters. The parameter values are chosen through a least-squares optimization procedure. The method is applied to {\textquotedblleft}data{\textquotedblright} from the H{\'e}non map and shown to be feasible. It is found that the parameter values which minimize the least-squares criterion do not, in general, reproduce the invariants of the dynamical system. The maps which do reproduce the values of the invariants are not optimum in the least-squares sense, yet still are excellent predictors. We discuss several technical and general problems associated with prediction and system identification on strange attractors. In particular, we consider the matter of the evolution of points that are off the attractor (where little or no data is available), onto the attractor, where long-term motion takes place.

}, isbn = {0375-9601}, doi = {10.1016/0375-9601(89)90839-6}, url = {The formulation of the Hamiltonian structures for inviscid fluid flows with material free surfaces is presented in both the Lagrangian specification, where the fundamental Poisson brackets are canonical, and in the Eulerian specification, where the dynamics is given in noncanonical form. The noncanonical Eulerian brackets are derived explicitly from the canonical Lagrangian brackets. The Eulerian brackets are, with the exception of a single term at each material free surface separating flows in different phases, identical to those for isentropic flow of a compressible, inviscid fluid. The dynamics of the free surface is located in the Hamiltonian and in the definition of the Eulerian variables of mass density, ρ(x, t), momentum density, M(x,t) [which is ρ times the fluid velocity v(x,t)], and the specific entropy, σ(x,t). The boundary conditions for the Eulerian variables and the evolution equations for the free surfaces come from the Euler equations of the flow. This construction provides a unified treatment of inviscid flows with any number of free surfaces.

}, isbn = {1070-6631}, doi = {10.1063/1.866987}, url = {We give a constructive method to find a Hamiltonian and Poisson brackets for any smooth vector field. The construction is local. We exhibit the Hamiltonian and Poisson brackets for the damped, one-dimensional, simple harmonic oscillator, wherein one can see the global problems. We suggest how one might usefully employ the construction in physical settings.

}, isbn = {0375-9601}, doi = {10.1016/0375-9601(87)90638-4}, url = {A formulation of inviscid fluid dynamics based on the density F(x, v, t) in a single-particle phase space [x=(x1, x2, x3), v=(v1, v2, v3)] is presented. This density evolves in time according to a Poisson bracket of F with H(x,v,t){\textemdash}a Hamiltonian in the same single-particle phase space. Compressible flows of a barotropic fluid and homogeneous, incompressible flows are discussed. The main advantage of the phase space density formulation over either Euler or Lagrange formulations is the algebraic and conceptual ease in making fully Hamiltonian approximations to the flow by altering H(x,v,t) and the Poisson brackets appropriately. The example of a shallow layer of rapidly rotating fluid where a Rossby number expansion is desired will be discussed in some detail. Changes of phase space coordinates that give an approximate H (expanded in Rossby number) and exact Poisson brackets will be exhibited. The resulting quasigeostrophic equations for F are two-dimensional partial differential equations to every order in Rossby number. The extension to multiple layers will be presented.

}, isbn = {1070-6631}, doi = {10.1063/1.866073}, url = {Using the energy-conserved quantity method developed by Arnol{\textquoteright}d [Dokl. Mat. Nauk 162, 773 (1965); Am. Math. Soc. Trans. 19, 267 (1969)] a study was made of the nonlinear stability of two inviscid fluid flows in three dimensions: (1) flow of a homogeneous fluid and (2) flow of a fluid whose energy density depends on the mass density alone (a so-called barotropic fluid). In order to implement the Arnol{\textquoteright}d technique one must identify the quantities conserved by the flow in addition to the total energy. In the case of the two flows considered, the conserved quantities cannot be expressed in terms of the usual Eulerian variables{\textemdash}fluid velocity and mass density{\textemdash}alone. Instead the introduction of the Lagrangian labels of the fluid elements is required. A complete description of these conserved quantities, in both Eulerian and Lagrangian specifications of the fluid, is provided. The phase space of the flow is the entire Hamiltonian phase space expressed in either canonical or noncanonical variables. The nature of the flows to which the Arnol{\textquoteright}d method is applicable is discussed in some depth. It was discovered that only time independent Eulerian flows can be discussed by the method; this result is given a general Hamiltonian context. The allowed Eulerian equilibria are displayed in detail. Finally, having the formal structure of these flows well in hand, it is shown that they are not, in three dimensions, formally stable. This results from a particle vortex stretching mechanism which is identified. The nature of the indicated instability is not revealed by this work, but may well be a slowly evolving Arnol{\textquoteright}d diffusion kind of breakdown of the equilibrium.

}, isbn = {1070-6631}, doi = {10.1063/1.866469}, url = {Nonlinear stability is analysed for stationary solutions of incompressible inviscid stratified fluid flow in two and three dimensions. Both the Euler equations and their Boussinesq approximations are treated. The techniques used were initiated by Arnold around 1965. These techniques combine energy methods, conserved quantities and convexity estimates. The resulting nonlinear stability criteria involve standard quantities, such as the Richardson number, but they differ from the linearized stability criteria. For example, the full three-dimensional problem has nonlinearly stable stationary solutions with Richardson number greater than unity, provided the gradients of the variations in density satisfy explicitly given bounds. Specific examples and the associated Hamiltonian structures for the problems are given.

}, doi = {10.1098/rsta.1986.0078}, url = {Geostrophic flow in the theory of a shallow rotating fluid is exactly analogous to the drift approximation in a strongly magnetized electrostatic plasma. This analogy is developed and exhibited in detail to derive equations for the slow, nearly geostrophic motion. The key ingredient in the theory is the isolation, to whatever order in Rossby number desired, of the fast motion near the inertial frequency. One of the remaining degrees of freedom represents a new approximate constant of the motion for nearly geostrophic flow. This is the analogue of the familiar magnetic-moment adiabatic invariant in the plasma problem. The procedure is a Rossby number expansion of the Hamiltonian for the fluid expressed in Lagrangian, rather than Eulerian, variables. The fundamental Poisson brackets of the theory are not expanded, so desirable properties such as energy conservation are maintained throughout.

}, isbn = {0309-1929}, doi = {10.1080/03091928508245427}, url = {With use of a method of Arnol{\textquoteright}d, we derive the necessary and sufficient conditions for the formal stability of a parallel shear flow in a three-dimensional stratified fluid. When the local Richardson number defined with respect to density variations is everywhere greater than unity, the equilibrium is formally stable under nonlinear perturbations. The essential physical content of the nonlinear stability result is that the total energy acts as a "potential well" for deformations of the fluid across constant density surfaces; this well is required to have definite curvature to assure stability under these deformations.

}, isbn = {0031-9007}, doi = {10.1103/PhysRevLett.52.2352}, url = {A simple analytical decay law for correlation functions of periodic, area-preserving maps is obtained. This law is compared with numerical experiments on the standard map. The agreement between experiment and theory is good when islands are absent, but poor when islands are present. When islands are present, the correlations have a long, slowly decaying tail.
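The numerical experiment this abstract refers to, correlation decay on the standard map, can be reproduced in outline. A minimal sketch, assuming the autocorrelation of cos(theta) over an ensemble of orbits of the Chirikov standard map; the parameter value K = 8 and the averaging choices are ours:

```python
import numpy as np

# Chirikov standard map on the torus: p' = p + K sin(theta), theta' = theta + p'.
def orbit(theta0, p0, K, nsteps):
    thetas = np.empty(nsteps)
    theta, p = theta0, p0
    for n in range(nsteps):
        p = p + K * np.sin(theta)
        theta = (theta + p) % (2 * np.pi)
        thetas[n] = theta
    return thetas

# Correlation C(t) = <cos(theta_{n+t}) cos(theta_n)> - <cos(theta)>^2,
# averaged over time and over an ensemble of random initial conditions.
rng = np.random.default_rng(0)
K, nsteps, nens, tmax = 8.0, 400, 200, 10
ens = np.array([orbit(th, p, K, nsteps)
                for th, p in rng.uniform(0, 2 * np.pi, size=(nens, 2))])
f = np.cos(ens)
C = [np.mean(f[:, t:] * f[:, :nsteps - t]) - np.mean(f) ** 2
     for t in range(tmax)]
# For large K, where islands are absent or negligible, the correlations
# decay rapidly toward zero, as the paper's theory predicts.
print([f"{c:+.3f}" for c in C])
```

Repeating this near a parameter value with visible islands (for example K just above a period-doubling window) exhibits the slowly decaying tail the abstract mentions.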

}, isbn = {0167-2789}, doi = {10.1016/0167-2789(83)90019-2}, url = {We argue that the universality and statistical nature of the deep-ocean internal gravity-wave spectrum results from a strange attractor in the driven, dissipative internal-wave field. To explore this we construct a model which injects energy into the oceanic surface at a constant rate. A two-dimensional version of the model is explored analytically and numerically. For the numerical work we restrict our considerations to a few of the longest-wavelength modes. This few-mode system exhibits bifurcation into limit cycles, period doubling of the limit cycles, and chaotic, non-periodic behaviour associated with a strange attractor. In an appendix we present some discussion of the three-dimensional version of the model.

}, isbn = {0022-1120}, doi = {10.1017/s0022112083003158}, url = {We present a method for studying nonintegrable Hamiltonian systems H(I,θ) = H0(I) + kH1(I,θ) (I, θ are action-angle variables) in the regime of large k. Our central tool is the conditional probability P(I,θ,t | I0,θ0,t0) that the system is at I, θ at time t given that it resided at I0, θ0 at t0. An integral representation is given for this conditional probability. By discretizing the Hamiltonian equations of motion in small time steps, ϵ, we arrive at a phase volume-preserving mapping which replaces the actual flow. When the motion on the energy surface E = H(I,θ) is bounded we are able to evaluate physical quantities of interest for large k and fixed ϵ. We also discuss the representation of P(I,θ,t | I0,θ0,t0) when an external random forcing is added in order to smooth the singular functions associated with the deterministic flow. Explicit calculations of a {\textquotedblleft}diffusion{\textquotedblright} coefficient are given for a non-integrable system with two degrees of freedom. The limit ϵ {\textrightarrow} 0, which returns us to the actual flow, is subtle and is discussed.
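The small-time-step, phase volume-preserving replacement of the flow described above is, in modern terminology, a symplectic integrator. A minimal sketch, using the symplectic Euler step on an illustrative one-degree-of-freedom Hamiltonian H(I,θ) = I²/2 + k cos θ; the model, step size, and all names are our choices, not the paper's:

```python
import numpy as np

# Symplectic Euler step for H(I, theta) = I^2/2 + k*cos(theta):
# update the action using the old angle, then the angle using the new action.
def step(I, theta, k, eps):
    I_new = I + eps * k * np.sin(theta)   # dI/dt = -dH/dtheta = k sin(theta)
    theta_new = theta + eps * I_new       # dtheta/dt = dH/dI = I
    return I_new, theta_new

# The resulting map preserves phase volume: its Jacobian has determinant 1.
def jacobian_det(I, theta, k, eps):
    # d(I', theta')/d(I, theta) = [[1, eps*k*cos(theta)],
    #                              [eps, 1 + eps^2*k*cos(theta)]]
    J = np.array([[1.0, eps * k * np.cos(theta)],
                  [eps, 1.0 + eps**2 * k * np.cos(theta)]])
    return np.linalg.det(J)

k, eps = 1.3, 0.05
I, theta = 0.4, 1.0
print(jacobian_det(I, theta, k, eps))  # equals 1 up to rounding
```

Because the determinant is identically 1 for every (I, θ), the discrete map inherits the volume-preserving character of the exact flow, which is what makes it a faithful replacement at fixed ϵ.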

}, isbn = {0167-2789}, doi = {10.1016/0167-2789(82)90025-2}, url = {We study nonintegrable Hamiltonian dynamics: H(I,θ) = H0(I) + kH1(I,θ), for large k, that is, far from integrability. An integral representation is given for the conditional probability P(I,θ,t | I0, θ0, t0) that the system is at I, θ at t, given it was at I0, θ0 at t0. By discretizing time into steps of size ϵ, we show how to evaluate physical observables for large k, fixed ϵ. An explicit calculation of a diffusion coefficient in a two degrees of freedom problem is reported. Passage to ϵ = 0, the original Hamiltonian flow, is discussed.

}, isbn = {0375-9601}, doi = {10.1016/0375-9601(81)90781-7}, url = {The exact evolution equation for the angle averaged phase space density in action-angle space is derived from the Liouville equation using projection operator techniques. This equation involves a correlation function of the initial value of the phase space density with the angle dependent part of the Hamiltonian, and a correlation function of the angle dependent part of the Hamiltonian with itself. Each of these correlation functions develops in time with angle projected dynamics. We show their relation to the correlation functions which develop in time with usual Hamiltonian dynamics. These correlation functions are then studied in the standard model of Chirikov, and we conclude that they behave as e^{-σt} cos(Ωt + φ) in regions of irregular motion. We conjecture that angle averaged correlation functions behave this way in general, and we give an argument based on the mixing property of the Hamiltonian system. Our argument goes beyond the usual mixing, so we regard it as a quasi-mixing hypothesis. Under this hypothesis the equation for the angle averaged phase space density becomes a diffusion equation which incorporates much of the non-linear dynamics of Hamiltonian systems exhibiting chaotic motion.

}, isbn = {0167-2789}, doi = {10.1016/0167-2789(81)90006-3}, url = {The author considers perturbation theory in ϵ for the classical Hamiltonian H = H0 + ϵH1, where H0 gives rise to a known motion and ϵ is small. First it is demonstrated how the usual secular terms and small denominators arise from a straightforward expansion in ϵ, and argued that they are artifacts of the method. Then an alternative perturbation theory is presented, based on an analysis of the operator (s - L)^{-1}, where s is a complex number and L is the Liouville operator corresponding to H. This perturbation series contains neither secular terms nor small denominators. In the case of almost multiply periodic systems, it is shown, to lowest non-trivial order in ϵ, how the series reproduces the standard results both in the resonant and nonresonant regions{\textemdash}all in one analytic formula. As a final exercise it is demonstrated that energy is conserved at order ϵ^{n+1} when the accuracy of the theory is order ϵ^n.

}, isbn = {0167-2789}, doi = {10.1016/0167-2789(80)90028-7}, url = {We give a formulation of the problem of propagation of scalar waves over a random surface. By a judicious choice of variables we are able to show that this situation is equivalent to propagation of these waves through a medium of random fluctuations with fluctuating source and receiver. The wave equation in the new coordinates has an additional term, the fluctuation operator, which depends on derivatives of the surface in space and time. An expansion in the fluctuation operator is given which guarantees the desired boundary conditions at every order. We treat both the case where the surface is time dependent, such as the sea surface, and the case where it is fixed in time. Also discussed is the situation where the source and receiver lie between the random surface and another, possibly also random, surface. In detail we consider acoustic waves for which the surfaces are pressure release. The method is directly applicable to electromagnetic waves and other boundary conditions.

}, isbn = {0001-4966}, doi = {10.1121/1.385113}, url = {We discuss the ambiguities considered by Gribov in the formulation of Coulomb gauge in non-Abelian gauge theories. We review the division of gauge field space into a sector with a unique transverse gauge, a sector with a two-fold ambiguity in transverse gauge, etc. We argue in a semi-classical fashion that transitions between these sectors readily occur and discuss the connection with ideas of quark confinement in Coulomb gauge. Because of these transitions it appears that the functional integral formulation of Coulomb gauge will be rather more complicated than expected in the past.

}, isbn = {0550-3213}, doi = {10.1016/0550-3213(78)90279-1}, url = {It is shown that the standard infrared-cutoff procedures are inconsistent with the general axial gauge in {\textquoteright}t Hooft{\textquoteright}s two-dimensional model of confinement.

}, isbn = {0556-2821}, doi = {10.1103/PhysRevD.15.2275}, url = {In massless field theory with asymptotic freedom a spontaneous mass may be generated through renormalization. This mass is non-analytic in the coupling and via the renormalization group is connected with the usual β function. The development of the β function itself in renormalized perturbation theory provides another connection with the spontaneous mass. In a two-dimensional field theory of four-fermion coupling which is asymptotically free, these connections are exploited to find the infrared behavior of the theory. An argument is made on how the results carry over to all orders of perturbation theory.

}, isbn = {0550-3213}, doi = {10.1016/0550-3213(77)90390-x}, url = {This is a review of the salient features of high energy diffraction scattering of hadrons. It begins with a summary of the experimental situation for those processes which persist at very high energies{\textemdash}the diffractive processes{\textemdash}and defines the underlying exchange mechanism called the Pomeron. A review is made of the key features of the multiperipheral model, since it lies at the beginning of all studies of diffraction. Its virtues and blemishes are exposed. Then we turn to various models which attempt to add unitarity to the multiperipheral model. From the point of view of the direct channel we consider absorptive models, eikonal models, and the multiperipheral bootstrap. The t channel is taken next, and an exposition of the formulation and major results of Reggeon field theory is given. "And it was a worthy saying of Julius Brusus to the workmen who offered, for three thousand crowns, to put his house in such a state that his neighbors would no longer have the view into it that they had: {\textquoteleft}I will give you,{\textquoteright} said he, {\textquoteleft}six thousand; make it so that everyone may see into it from all sides.{\textquoteright}" de Montaigne, M. (1585-88), "Du repentir"

}, isbn = {0034-6861}, doi = {10.1103/RevModPhys.48.435}, url = {We consider the viewpoint that Reggeons are built out of Reggeized quarks and that an O(2) symmetry between two kinds of quarks ({\textquotedblleft}heavy{\textquotedblright} and {\textquotedblleft}light{\textquotedblright}) is dynamically broken by their interaction. The Goldstone boson thus generated is associated with the Pomeron, and it is argued that it naturally has a smaller slope than the other Reggeons which are ordinary bound states of the quarks.

}, isbn = {0370-2693}, doi = {10.1016/0370-2693(76)90082-4}, url = {We show that in Reggeon field theory the intercept of the interacting Pomeron must be less than or equal to 1. The intercept is equal to 1 only when the bare intercept has a critical value. When the bare intercept exceeds the critical value a Pomeron field operator acquires a vacuum expectation value, and a discrete symmetry of the Lagrangian is spontaneously broken. It turns out that symmetry breaking is unacceptable even though α(0)\<1. Therefore, Reggeon field theory appears to be valid only when the bare intercept does not exceed the critical value.

}, isbn = {0556-2821}, doi = {10.1103/PhysRevD.14.632}, url = {Reggeon field theory, which enables one to systematically analyze the exchange of Regge poles and associated branch points in high energy hadron scattering, is formulated and discussed. The field theory is first motivated by a consideration of hybrid Feynman graphs, and then a more general derivation from crossed-channel multiparticle unitarity relations is given. Rules for Reggeon interaction and propagation are formulated. The problem of the Pomeron or vacuum pole, which has α(0)=1 and is responsible for diffractive processes, is treated in some detail. In particular the renormalization group analysis of Reggeon field theory is presented and the structure of Pomeron partial wave amplitudes is elucidated. Also the question of Pomeron or absorptive corrections to secondary trajectories (both fermion and boson) is considered. Some comments are made on important problems yet remaining in Reggeon field theory; in particular the study of its s-channel content is stressed.

}, url = {We use the Reggeon field-theory rules for inclusive reactions to study those processes in the triple-Regge region. We first show that at asymptotic energies the dominant Reggeon graphs have a single Pomeron connected to external fast particles. We construct the sum of these dominant graphs by obtaining the infrared forms of the Pomeron propagator and triple-Pomeron vertex. This is done by an expanded set of renormalization-group equations which allow one to determine the separate dependencies on all momenta and energies. As a by-product we obtain the momentum-transfer dependence of dσ/dt in 2{\textrightarrow}2 processes. The inclusive cross section is discussed in detail as to its dependence on momentum transfer and missing mass, and we verify that there is no violation of s-channel unitarity when Pomerons interact among themselves. We also estimate the energy at which our asymptotic forms start to become valid.

}, isbn = {0556-2821}, doi = {10.1103/PhysRevD.12.2798}, url = {We derive rules for evaluating Regge branch-cut corrections to the triple-Regge regime of inclusive reactions. Our approach is to study classes of hybrid field-theory graphs for the six-point function and to set up a constructive procedure for evaluating the contributions with discontinuities in the missing mass. We find that all contributions, cut and pole, can be given in terms of a single partial-wave amplitude and our rules are for constructing that. The rules are then cast into a form appropriate for a Reggeon field-theory evaluation of this partial-wave amplitude, and a renormalization-group attack on the problem is outlined. This latter work is especially relevant for the case when Pomeron exchange is permitted and a nonperturbative evaluation of the partial-wave amplitude near Ji≈1 and ti≈0 is required.

}, isbn = {0556-2821}, doi = {10.1103/PhysRevD.12.2459}, url = {Within the context of a generalized Reggeon calculus we study the infrared J{\textrightarrow}1, t{\textrightarrow}0 limit of a class of models whose "bare" structure arises from elastic amplitudes of the form A(s,t) = s(ln s)^ν Jν(a√(-t) ln s). Such amplitudes are suggested by the implementation of s-channel unitarity via eikonalization of a "Born term," via absorption models, and via the multiperipheral bootstrap. We employ the renormalization group to study the renormalized Pomeranchuk singularity when the interaction involves a triple coupling. Our major result is that for ν=0 these theories are infrared-free. The total cross section behaves as σT(s)\~{}γ1 - γ2/[(ln s)(ln ln s)^{1/2}], where γ1 factorizes. Scaling laws for the Reggeon proper vertex functions are given.

}, isbn = {1550-7998}, doi = {10.1103/PhysRevD.10.1939}, url = {We study the corrections to any secondary Regge trajectory arising from repeated exchange of the Pomeranchuk singularity. Using renormalization-group methods we are able to determine the form of all Pomeron, Reggeon Green{\textquoteright}s functions in the neighborhood of J=α(0)=1 for the Pomerons and J=αR(0)≈1/2 for the Reggeons. Starting with bare linear trajectories for both Pomerons and Reggeons we establish how these are modified by a triple-Pomeron and a two-Reggeon-Pomeron coupling. In an expansion of the theory around D=4 space dimensions (D=2 is where physics takes place), we find three allowed stable points of the renormalization-group equations in the infrared [J{\textrightarrow}1 or J{\textrightarrow}αR(0)] limit. For each of these we study the renormalized Reggeon trajectories and the structure of the Green{\textquoteright}s functions.

}, isbn = {0556-2821}, doi = {10.1103/PhysRevD.10.721}, url = {A self-consistency or bootstrap principle is suggested to determine the structure of the Pomeranchuk singularity in the neighborhood of J = 1 and t = 0. The ingredients in this are a Reggeon field theory which we require to be renormalizable and infrared free in the sense of the renormalization group. This guarantees that the input singularity reproduces itself near J=1, t=0 with small computable corrections. Several examples of physical significance are discussed.

}, isbn = {0370-2693}, doi = {10.1016/0370-2693(74)90295-0}, url = {We consider a Reggeon field theory when the bare or input Regge intercept α0 is greater than one. This corresponds to a negative mass-squared term in conventional field theory and allows for a spontaneous symmetry breakdown. A theory with Regge intercept at one emerges, restoring the Froissart bound by t-channel considerations alone. In our elementary example the resulting bare trajectory is nearly of the square-root variety familiar from s-channel eikonalization of models which violate the Froissart bound.

}, isbn = {0370-2693}, doi = {10.1016/0370-2693(74)90580-2}, url = {Using the methods of the renormalization group we study the structure of Pomeranchukon Green{\textquoteright}s functions in a Reggeon calculus or a Reggeon-field-theory model. We are able to determine the behavior of all Green{\textquoteright}s functions in the "infrared" limit of small Reggeon momenta and small Reggeon energy (-E = angular momentum minus one). This behavior is governed by a zero of the classic Gell-Mann-Low variety which arises when the triple-Pomeranchukon coupling is pure imaginary, as suggested by Gribov{\textquoteright}s analysis of Feynman graphs in ordinary field theory. The form of the Pomeranchukon propagator dictates that the trajectory function be singular at t=0 and that a variety of scaling laws for the Green{\textquoteright}s functions be obeyed. By coupling particles into the theory, we find that total cross sections are predicted to rise as a small power of ln s, which in the model is approximately σT(s)\~{}(ln s)^{1/6}.

}, isbn = {0556-2821}, doi = {10.1103/PhysRevD.9.2397}, url = {The methods of the renormalization group are used to analyze the behavior of all Reggeon proper vertex functions in a Reggeon field theory when all angular momenta are near one or all Reggeon momenta are small. This behavior is governed by an infrared stable Gell-Mann-Low zero which arises when the triple Pomeron coupling is imaginary. A renormalized trajectory must be singular at t=0, and a variety of scaling laws for the vertex functions are obeyed. Coupling particles to the Reggeons and using the scaling laws, it is found to high accuracy that σT(s)\~{}A(log s)^{1/6}[1 - B/(log s)^{1/2} + ...], where A factorizes.

}, isbn = {0370-2693}, url = {Renormalization-group methods are used to study the Pomeranchuk Green function near l=1, t=0 in a Reggeon field theory where only four-Pomeranchuk couplings are nonzero. When the renormalization parameters satisfy certain conditions the theory is found to be {\textquoteleft}infrared free{\textquoteright} (the effective couplings tend to zero). The renormalized Pomeranchuk trajectory is of the form α(t) = 1 + At + Bt/(ln Ct)^5 for t \> 0, where A, B, C are constants; the asymptotic form of the total cross section is σT(s)\~{}γ1 + γ2/(ln ln s)^5, where γ1, γ2 are positive constants.

}, isbn = {0556-2821}, doi = {10.1103/PhysRevD.9.3304}, url = {We show how a simple tmin effect can significantly alter the σn\~{}1/n^2 rule found in most diffractive models of particle production. Data on pp, π-p, and K-p collisions appear consistent with n^2 σn\~{}exp(-n^4/s), which implies that (1) double diffraction dissociation is the dominant feature, and (2) ⟨n⟩\~{}ln s, but correlation functions rise much more slowly than in earlier treatments. Critical comments on diffractive models are given.

}, isbn = {0031-9007}, doi = {10.1103/PhysRevLett.30.67}, url = {Assuming that the multiplicity moments fk(s) formed from the topological cross sections σn(s) or from integrated inclusive correlation functions behave for large s as fk(s)\~{}ck ln s, we discuss a general relation between the ck and the leading asymptotic behavior of σn(s). The relation has been given by Harari, but our arguments demonstrate that it is not connected with any hypothesis concerning the dependence of hadronic parameters on some underlying coupling constants.

}, isbn = {0031-9007}, doi = {10.1103/PhysRevLett.30.409}, url = {Theories which attempt to interpret rising total cross sections, such as eikonal models and triple-Pomeron terms, gauge theories of strong interactions and their consequences for scaling, and dual string models are discussed.

}, isbn = {0094-243X}, url = {By studying the partial-wave expansions of multiparticle amplitudes we argue that analytic properties in complex helicity are just a reflection of the familiar analytic structure in angular momentum. We give a criterion which determines when an asymptotic behavior in an azimuthal angle (conjugate to the helicity) can be reached in a physical process. Our discussion centers around the five- and six-point functions; the latter, being relevant for single-particle inclusive processes, is considered in detail. One of the interesting features of analytic structure in λ is that it depends in detail on what other variables one chooses in addition to the azimuthal angle conjugate to it. That singularity structure is found by examining the partial-wave analysis appropriate to the chosen variables. Finally, a discussion of signature in many-particle amplitudes is given.

}, isbn = {0556-2821}, doi = {10.1103/PhysRevD.6.3018}, url = {The relationship of diffractive dissociation to multiperipheral dynamics is explored within the framework of the ABFST model. After splitting the kernel of the integral equation into {\textquotedblleft}resonance{\textquotedblright} and {\textquotedblleft}Pomeranchuk{\textquotedblright} components, a variational technique is employed to treat the Pomeranchuk component as a small perturbation. The corresponding perturbed structure of the Fredholm determinant is developed in detail, leading to identification of a small dimensionless parameter ηP which measures the strength of the APS branch point and thereby sets the scale of Regge fine structure near J = 1. It is shown that ηP also determines the probability of diffractive dissociation into large masses and thus is subject to experimental determination.

}, isbn = {0003-4916}, doi = {10.1016/0003-4916(72)90317-x}, url = {Integral equations for coupled-particle and Reggeon partial-wave amplitudes are presented. A construction of these equations proceeds from the unitarity relation using the notion of two-Reggeon irreducibility. From these equations, which are appropriate matrix elements of a Lippmann-Schwinger equation in two-dimensional nonrelativistic quantum mechanics, we demonstrate that the discontinuity across the two-Reggeon cut in particle scattering is equal to an integral over Reggeon-particle absorptive parts actually measurable in single-particle inclusive reactions. This provides one with a handle on the magnitude of Regge cuts. Finally we make a little model of coupled Reggeon and particle "states" and solve for the allowed partial-wave amplitudes when a pole and a two-Reggeon cut are close by. This has clear relevance for the physics of diffraction scattering near l=1 and t=0.

}, isbn = {0556-2821}, doi = {10.1103/PhysRevD.6.2788}, url = {To investigate the idea that the transverse "size" of a photon depends on its squared mass, Q^2, we examine some simple inclusive experiments involving virtual timelike or spacelike photons. We argue that the average transverse momentum of produced hadrons grows as |Q^2|^{1/2} when those hadrons carry a finite fraction of the photon momentum. The basis for this is the damping in four-momentum transfer common to a variety of field-theory and multiperipheral models. Also, we discuss the decorrelation of average transverse momentum and Q^2 as the rapidity between the photon and produced hadron grows. This allows one to test factorization of Regge-pole residues in a single experiment.

}, isbn = {0556-2821}, doi = {10.1103/PhysRevD.5.2050}, url = {We present a Laplace transform procedure for the partial diagonalization of Bethe-Salpeter-like absorptive part equations for the forward scattering of spinning particles. By using Laplace transforms the analysis allows directly for power growth in energy of the absorptive parts of the amplitude without any need for the analytic continuations required in a Fourier analysis. The integral equation with an arbitrary kernel is explicitly diagonalized, resulting in a reduction from a set of four-dimensional equations to coupled one-dimensional equations. As a tool for locating the proper definitions of partial wave amplitudes and for deriving a crucial addition theorem, the necessary representation theory of the little group for forward scattering, SO(1, 3), is described.

}, isbn = {0003-4916}, doi = {10.1016/0003-4916(72)90192-3}, url = {We treat the spin dependences of high-energy inclusive reactions from the viewpoint of a J-plane analysis. Measurements of various spin correlations are shown to bear directly on questions such as factorization of residues at Regge poles and the strengths of J-plane cuts. We provide examples of experiments which test these ideas using polarized beams, polarized targets, or polarized products. Electroproduction and high-energy neutrino reactions are viewed as sources for polarized spin-one "beams," and the implications for the processes are drawn in detail. Finally, we consider scaling properties of these latter reactions and discuss produced-particle multiplicity in the scaling region.

}, isbn = {0556-2821}, doi = {10.1103/PhysRevD.5.699}, url = {We demonstrate that a nonsense wrong signature zero is responsible for the vanishing of the triple Pomeron vertex with all legs at zero mass. The key input is the vanishing of certain fixed pole residues which is enforced by unitarity. Some other consequences of this general feature are drawn.

}, isbn = {0370-2693}, doi = {10.1016/0370-2693(72)90747-2}, url = {The distribution functions for the "inclusive" production of N specified particles plus anything else are treated from a J-plane point of view. The variables relevant to the exhibition of the asymptotic behavior of these distributions are chosen during a group-theoretic discussion of the matrix elements involved. After the variables are located in this fashion, a crossed-channel partial-wave analysis is carried out to exploit the SO(1,3) symmetry of the production cross sections, and in the context of this partial-wave structure the multi-Regge asymptotics are presented. Such features as pionization and limiting fragmentation are treated, as are certain phenomena involving the approach to limiting distributions, including the rate of approach and specific dependences on certain variables related to longitudinal momenta. Single- and double-particle production is treated in detail, and then a set of numerical estimates is made for proton-proton collisions with incident lab momenta of about 200-500 GeV/c to give an indication where many of the phenomenological results might be tested. A mathematical appendix is provided for those interested in group theory.

}, isbn = {0556-2821}, doi = {10.1103/PhysRevD.3.2227}, url = {A Laplace transform is developed to perform a crossed channel partial wave analysis of Bethe-Salpeter equations for absorptive parts of nonforward scattering matrix elements involving particles with spin. The method requires no rotation of contours and allows from the outset power growth in energy of the absorptive part. Diagonalization of the equation with an arbitrary kernel is explicitly carried out, thereby reducing it from a four-dimensional equation to a two-dimensional one. The analysis makes continual use of the underlying SO(1, 2) group symmetry in choosing kinematic variables, defining the transform and proving the crucial addition theorem necessary to effect the diagonalization.

}, isbn = {0003-4916}, doi = {10.1016/0003-4916(71)90285-5}, url = {Given the two leading eigenvalues and eigenfunctions of the resonance (low-subenergy) component of a multiperipheral kernel and assuming lower eigenvalues to be unimportant, it is shown how the mixture corresponding to the Pomeranchon eigenfunction may be calculated from considerations of self-consistency. The method is illustrated in a multiperipheral model with pseudoscalar-meson links by associating the two leading unperturbed eigenstates with the $2^+$ particles f(1260) and f'(1514).

}, isbn = {0556-2821}, doi = {10.1103/PhysRevD.4.2988}, url = {We identify a small dimensionless parameter $\eta_P$ associated with the triple Pomeranchon vertex, which governs both the rate of high-mass diffractive dissociation processes and the fine structure in the J-plane spectrum near $J = 1$. Theoretical arguments are given that $\eta_P \lesssim 1 - \alpha_P(0)$, and a possible experiment to measure $\eta_P$ is discussed. A formula for $\eta_P$, based on a multiperipheral model, shows that in such models this parameter does not vanish, and a connection of $\eta_P$ with a perturbation formalism for the Pomeranchon propagator is suggested.

}, isbn = {0031-9007}, doi = {10.1103/PhysRevLett.26.937}, url = {Polarization phenomena in inclusive reactions are shown to exhibit remarkably simple features from the point of view of J-plane physics with factorizable singularities. In particular, straightforward tests of factorization are proposed in single- and multiple-particle inclusive experiments initiated with polarized photon beams{\textemdash} real and virtual {\textemdash} or polarized proton targets.

}, isbn = {0031-9007}, doi = {10.1103/PhysRevLett.26.732}, url = {A Laplace transform is developed for the crossed-channel partial-wave analysis of Bethe-Salpeter-like equations for the absorptive part of scattering matrix elements. The transform requires no assumption about rotating contours to a Euclidean region and allows from the outset power growth in energy of the transformed absorptive part. This eliminates the need for any analytic continuation after the transformation. The diagonalization of the forward and nonforward equations with arbitrary irreducible kernels is explicitly carried out.

}, isbn = {0556-2821}, doi = {10.1103/PhysRevD.2.711}, url = {The total cross section for meson-meson scattering at high energy is calculated using a simple, but plausible, SU(n)-symmetric multiperipheral model. The resulting $\sigma_T$ is of the order $16\pi/(3NM_V^2)$, where $M_V$ is the central mass of the dominant low-energy resonance multiplet in elastic meson-meson scattering and N is the dimensionality of the multiplet of the incident mesons. The nongeometric character of this result is discussed.

}, isbn = {0031-9007}, doi = {10.1103/PhysRevLett.25.1735}, url = {A straightforward realization of a strictly localizable quantum field theory is considered. Some properties of the fields are established. An explicit example of the ability of the {\textquotedblleft}Born approximation{\textquotedblright} in this theory to yield crossing symmetric resonance amplitudes is discussed.

}, isbn = {0003-4916}, doi = {10.1016/0003-4916(70)90365-9}, url = {The joint implications of Bjorken and Regge asymptotics are discussed for the structure function of electroproduction. It is suggested that for $q^2 \rightarrow \infty$, the function $\nu W_2$ approaches a constant for large values of $\rho = m\nu/q^2$.

}, isbn = {0031-9007}, doi = {10.1103/PhysRevLett.22.500}, url = {A fully relativistic eikonal expansion is discussed. The connection with the high-energy behavior of elastic scattering, especially for quantum electrodynamics, is examined, and the relevance of the results for Regge asymptotic behavior is investigated.

}, isbn = {0031-9007}, doi = {10.1103/PhysRevLett.23.53}, url = {A suggestion of how to correlate the electromagnetic form factors of the proton with p-p scattering is developed in detail. We postulate a new elementary local interaction of current-current form plus a diffractive term, and construct for p-p scattering an approximately unitary scattering amplitude for large fixed s and all values of t, using the Fourier-Bessel transform of the scattering amplitude. The t dependence of the resulting cross section is closely correlated with the fourth power of the electromagnetic form factor of the proton, as suggested first by Wu and Yang, and agrees well with high-energy data ($E_{\text{lab}} \approx 30$ BeV) over many decades in values of $d\sigma/dt$. Differences from related models are discussed, as well as further applications and experimental implications of the theory.

}, isbn = {0031-899X}, doi = {10.1103/PhysRev.177.2458}, url = {We discuss the possibility of constructing, out of particle creation and destruction operators, local quantum fields that transform as representations of the homogeneous Lorentz group. Our immediate goal is to write down a consistent local quantum field theory which can simultaneously describe many particles with different masses and spins. In the case that the field is a finite-dimensional irreducible Lorentz tensor, we are able to carry through our program with no restrictions on the masses considered as functions of the spin, provided the usual connection between spin and statistics is satisfied. However, when the field transforms as a unitary irreducible representation of the homogeneous Lorentz group (an infinite-dimensional representation), the requirement of locality, along with the physical assumption that the masses are bounded below, $m(j) \geq m_0 > 0$, leads to the restriction that the masses are independent of the spin. This property is shown to hold when the transformation law of the field is taken to be an irreducible finite-dimensional representation $\otimes$ a unitary irreducible representation. The physical consequences of this result and possible methods for evading it are discussed. Finally, an Appendix is included, where the related problem of orthogonality properties of timelike solutions to infinite-component wave equations is examined. In particular, we show that when the solutions of such wave equations transform as unitary irreducible representations of the homogeneous Lorentz group, only the Majorana representations support a scalar product, which is orthogonal for different spins.

}, isbn = {0031-899X}, doi = {10.1103/PhysRev.171.1442}, url = {A derivation of low-energy theorems for Compton scattering from spin-0 and spin-{\textonehalf} targets is given within the framework of dispersion theory. We work exclusively with physical helicity amplitudes and utilize the zeros of these amplitudes forced by angular momentum conservation to write unsubtracted dispersion relations. The conventional requirement of gauge invariance is replaced in our work by Lorentz invariance together with the knowledge that the photon is a massless spin-1 particle. From the dispersion relations we extract a number of sum rules of the superconvergence type, one example of which reduces to the Drell-Hearn result in the forward direction.

}, isbn = {0031-899X}, doi = {10.1103/PhysRev.165.1594}, url = {Through the use of the Gell-Mann algebra of currents and the current-current form for the nonleptonic weak Lagrangian, we express certain combinations of s-wave hyperon decay amplitudes in terms of integrals over the absorptive parts of the forward {\textquotedblleft}scattering amplitude{\textquotedblright} of a weak current on baryons. Employing the technique of Cottingham we show that by a rotation of the contour of integration these absorptive parts are related to functions directly measurable in high-energy neutrino reactions. The assumed correctness of certain unsubtracted dispersion relations then allows an experimental test of the current-current nature of the nonleptonic weak interactions.

}, isbn = {0003-4916}, doi = {10.1016/0003-4916(68)90246-7}, url = {A set of graphical rules is established for writing down all the contributions to the hadron process α {\textrightarrow} β + n soft pions dictated by the algebra of current commutators of Gell-Mann and PCAC. Explicit results are given for n = 3. An example of n = 4 is treated in the form of soft π-π scattering, where the small s-wave scattering lengths found by Weinberg are rederived.

}, isbn = {0003-4916}, doi = {10.1016/0003-4916(67)90136-4}, url = {We show that the presence of a fixed pole at $J = 1$ in the process $\gamma + \gamma \rightarrow h + \bar{h}$ reinstates the coupling of the $\gamma$-$\gamma$ state to the vacuum trajectory, and hence permits a finite total photon cross section as $s \rightarrow \infty$.

}, isbn = {0031-899X}, doi = {10.1103/PhysRev.160.1329}, url = {Properties of the crossing matrix for helicity amplitudes are enumerated and then exploited in a simple Regge-pole model of photon-hadron interactions. We find that: (1) The vacuum or Pomeranchuk trajectory is absent in forward Compton scattering of real photons on nucleons or pions. Thus the total nuclear cross section for photons on nucleons should vanish asymptotically. (2) The amplitudes entering the forward spin-flip Compton scattering of virtual photons on protons, which are crucial in the proton structure contributions to the ground-state hyperfine splitting (hfs) in hydrogen, may be chosen so that they do not require a subtraction when writing dispersion relations for them in the energy variable. One must look elsewhere than to the high-energy behavior of the amplitudes for the erasure of the discrepancy between theory and experiment on the hfs. (3) In forward photoproduction of vector mesons on protons, the polarization of the mesons should be predominantly longitudinal at "high energy." (4) In the differential cross section $d\sigma/dt$ for photoproduction of neutral pions, there should be a dip at $t \approx -0.5~\mathrm{BeV}^2$ because of a nonsense zero in all crossed-channel helicity amplitudes.

}, isbn = {0031-899X}, doi = {10.1103/PhysRev.158.1462}, url = {Using the partially conserved axial-vector current and the commutation relations of the vector and axial-vector currents with themselves and with the Hamiltonian responsible for weak nonleptonic decays, we demonstrate how to expand weak-decay amplitudes in the momenta of pions involved in the decays. The resulting expansion gives the on-mass-shell decay amplitude up to first order in pion momenta. We apply these techniques to the three-pion decays of kaons and show that they quantitatively reproduce the experimental situation of an amplitude linearly dependent on the "odd"-pion kinetic energy. All three pions are treated on an equal footing, and on-the-mass-shell quantities are dealt with everywhere.

}, isbn = {0031-899X}, doi = {10.1103/PhysRev.153.1547}, url = {The magnetic moments of the neutron and proton are calculated within the framework of the S-matrix perturbation theory recently developed by Dashen and Frautschi. In the present context, this method expresses the magnetic moments in terms of a dispersion integral involving photopion production. Evaluation of this integral in terms of contributions from appropriate low-mass intermediate states yields results for the individual magnetic moments which are larger than the experimental values by about a factor of two. The calculation does, however, give an approximately correct value for the ratio of the isovector moment to the isoscalar moment, and a value for the isoscalar moment that agrees with the experimental value to within about a factor of two.

}, isbn = {0031-899X}, doi = {10.1103/PhysRev.143.1225}, url = {Dispersion theoretic perturbation techniques originally developed by Dashen and Frautschi for the discussion of electromagnetic corrections to the strong interactions are used in evaluating the Lamb shift in hydrogen-like atoms. We first indicate how to determine the Lamb shift to any order in the binding strength of the Coulomb potential ($Z\alpha$) and the perturbation by the radiation field ($\alpha$). All infrared divergences arising from the Coulomb potential are shown to cancel in a straightforward manner. All infrared divergences of the perturbation are handled in a dispersion theoretic manner. Explicit formulas are given for the lowest order, $\alpha(Z\alpha)^4 m_e$, Lamb shift for spin zero and spin one-half electrons. A real numerical evaluation of this shift is not possible because of divergences indicated in the text; however, some fairly simple numerical estimates are made which indicate that the formulas derived in the text give the correct sign and order of magnitude of the level shift. Defects and virtues of the dispersion perturbation formulae are discussed. A short appendix on the possible treatment of the hyperfine structure of hydrogen by the same techniques concludes the work.

}, isbn = {0003-4916}, doi = {10.1016/0003-4916(66)90056-x}, url = {The perturbation series of Dashen and Frautschi is derived via a "D-matrix" approach to all orders in the perturbation. Explicit formulas are given for the first and second orders, and their equivalence to the Rayleigh-Schr{\"o}dinger perturbation series in potential theory is shown. A method is then given to carry out the evaluation of the series to any order. The first-order formula is examined in three models; the four-point interaction, two-particle scattering with a left-hand cut given by a single pole, and an augmented Lee model. The exact results of these models are shown to agree in each case with the theory of Dashen and Frautschi. Finally, we discuss differences between the Lee-model calculation presented here and Dashen{\textquoteright}s calculation of the neutron-proton mass difference.

}, isbn = {0031-899X}, doi = {10.1103/PhysRev.144.1355}, url = {We discuss some conventional aspects of certain model theories based both on traditional field equations and on previous solutions obtained by unconventional means. We show that the conventional approach, as the cutoff necessary to define it is removed, cannot reproduce the no-cutoff results of the unconventional approach, nor can any meaningful limit emerge unless the interaction vanishes. The no-cutoff results permit explicit solutions in the lowest "sectors" of Hilbert space and allow for a discussion of renormalization constants, two-point functions, and analyticity of the eigenvalues in the coupling constants: that is, the validity of a certain perturbation expansion. The presence of two asymptotic "one-particle" states is noted and discussed. Finally, we comment briefly on the failure of the conventional field equations, and the relevance of reducible representations of the canonical commutation relations for problems in quantum field theory.

}, isbn = {0031-899X}, doi = {10.1103/PhysRev.152.1198}, url = {