We describe CDE, a REDUCE package devoted to differential-geometric computations on Differential Equations (DEs, for short).
We will give concrete recipes for computations in the geometry of differential equations: higher symmetries, conservation laws, Hamiltonian operators and their Schouten bracket, recursion operators. All programs discussed here are shipped together with the CDE sources, inside the REDUCE sources. The mathematical theory on which the computations are based can be found in refs. [?, ?]. We invite the interested reader to have a look at the website [?], which contains useful resources in the above mathematical area. A book on integrable systems and CDE is currently being written [?], with more examples and more detailed explanations of the mathematical part.
CDE is a REDUCE package for differential-geometric computations on DEs. The package aims at defining differential operators in total derivatives and computing with them. Such operators are called C-differential operators (see [?]).
CDE depends on the REDUCE package CDIFF for constructing total derivatives. CDIFF was developed by Gragert and Kersten for symmetry computations in DEs, and later extended by Roelofs and Post.
There are many software packages that can compute symmetries and conservation laws; many of them run on Mathematica or Maple. Those that run on REDUCE were written by M.C. Nucci [?, ?], F. Oliveri (ReLie, [?]), F. Schwarz (SPDE, in the official REDUCE distribution) and T. Wolf (APPLYSYM and CONLAW, in the official REDUCE distribution, [?, ?, ?, ?]).
The development of CDE started from the observation that no computer algebra tool for the investigation of integrability-related structures of PDEs existed in the public domain. We are only aware of a Mathematica package that may find recursion operators under quite restrictive hypotheses [?].
CDE is especially designed for the computation of integrability-related structures (such as Hamiltonian, symplectic and recursion operators) for systems of differential equations with an arbitrary number of independent and dependent variables. On the other hand, CDE is also capable of computing (generalized) symmetries and conservation laws. The aim of this guide is to introduce the reader to computations of integrability-related structures using CDE.
The current version of CDE, 2.0, has the following features:
At the moment the papers [?, ?, ?, ?, ?, ?] have been written using CDE, and more research on integrable systems using CDE is in progress.
Readers are warmly invited to send questions, comments, etc., both on the computations and on the technical aspects of installation and configuration of REDUCE, to the author of this document.
Acknowledgements. I’d like to thank Paul H.M. Kersten, who explained to me how to use the original CDIFF package for several computations of interest in the geometry of differential equations. When I started writing CDE I was substantially helped by A.C. Norman in understanding many features of REDUCE which were deeply hidden in the source code and not well documented. This also led to the writing of a manual of REDUCE’s internals for programmers [?]. Moreover, I’d like to thank the members of the REDUCE mailing list for their prompt replies with solutions to my problems. On the mathematical side, I would like to thank J.S. Krasil’shchik and A.M. Verbovetsky for constant support and stimulating discussions which led me to write the software. Thanks are also due to B.A. Dubrovin, M. Casati, E.V. Ferapontov, P. Lorenzoni, M. Marvan, V. Novikov, A. Savoldi, A. Sergyeyev and M.V. Pavlov for many interesting discussions.
The mathematical theory of jets of even (i.e., standard) variables and total derivatives can be found in [?, ?].
Let us consider the space ℝ^{n} × ℝ^{m}, with coordinates (x^{λ},u^{i}), 1 ≤ λ ≤ n, 1 ≤ i ≤ m. We call x^{λ} independent variables and u^{i} dependent variables. Let us introduce the jet space J^{r}(n,m). This is the space with coordinates (x^{λ},u_{σ}^{i}), where u_{σ}^{i} is defined as follows. If s: ℝ^{n} → ℝ^{m} is a differentiable function, then

u_{σ}^{i}(j^{r}(s)(x)) = (∂^{|σ|}s^{i}∕∂x^{σ})(x).
Here σ = (σ_{1},…,σ_{n}) ∈ ℕ^{n} is a multi-index. We set |σ| = σ_{1} + ⋯ + σ_{n}. If σ = (0,…,0) we set u_{σ}^{i} = u^{i}.
CDE is first of all a program which is able to create a finite order jet space inside REDUCE. To this aim, issue the command
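As for any REDUCE package, loading is done with the standard package-loading statement:

```reduce
load_package cde;
```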
Then, CDE needs to know the variables and the maximal order of derivatives. The input can be organized as in the following example:
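A typical declaration, modeled on the example files shipped with CDE (the identifier names below follow those files and this guide; adapt the lists to your own problem), is:

```reduce
indep_var:={x,t};   % list of independent variables
dep_var:={u};       % list of dependent variables
total_order:=10;    % maximal order of derivatives in the jet space
```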
Here, the lists of independent and dependent variables are declared, together with the maximal order total_order of derivatives in the jet space.
Two more parameters can be set for convenience:
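Following the description in the next sentence (the identifier resname also appears later, in the section on symmetries; the identifier statename is an assumption based on that description), the two parameters could be set as:

```reduce
statename:="jet_state.txt";  % file recording the internal state of cde.red
resname:="jet_res.txt";      % file collecting results of the computation
```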
These are the name of the output file for recording the internal state of the program cde.red (and for debugging purposes), and the name of the file containing results of the computation.
The main routine in cde.red is called as follows:
Here the two empty lists are placeholders; they are of interest for computations with odd variables/differential equations. The function cde defines derivative symbols of the type:
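For instance, with independent variables {x,t} and dependent variables u, v, the derivative symbols are identifiers built according to the naming convention of the CDE example files (shown here as an illustrative assumption):

```reduce
u_x;     % D_x u
u_2x;    % D_x D_x u
u_xt;    % D_x D_t u  (x precedes t, following the order of indep_var)
v_2xt;   % D_x D_x D_t v
```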
Note that a symbol like v_tx does not exist in the jet space: mixed derivatives are always recorded with the independent variables in a single fixed order. Indeed, introducing all possible permutations of independent variables in indices would increase the complexity and slow down every computation.
Two lists generated by CDE can be useful: all_der_id and all_odd_id, which are, respectively, the lists of identifiers of all even and odd variables.
Other lists are generated by CDE, but they are accessible in REDUCE symbolic mode only. Please check the file global.txt to know the names of the lists.
It can be useful to inspect the output generated by the function cde and the above lists in particular. All that data can be saved by the function:
CDE has a few procedures involving the jet space, namely:
The function cde defines total derivatives truncated at the order total_order. Their coordinate expressions are of the form
D_{λ} = ∂∕∂x^{λ} + u_{σλ}^{i} ∂∕∂u_{σ}^{i}  (16.46)
where σ is a multi-index and summation over σ and i is understood.
The total derivative of an argument φ is invoked as follows:
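Assuming the total-derivative operator td of the underlying CDIFF package (the syntax used in the CDE example files), the second total derivative of an expression phi with respect to x would be:

```reduce
td(phi,x,2);   % D_x D_x phi, i.e. D_{(2,0)} phi
```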
the syntax closely follows REDUCE’s syntax for standard derivatives df; the above expression translates to D_{x}D_{x}φ, or D_{(2,0)}φ in multi-index notation.
When a coefficient in a total derivative has order higher than the maximal order, it is replaced by the identifier letop, which is a function that depends on the independent variables. If such a function (or one of its derivatives) appears during a computation, it is likely that we went too close to the highest-order variables defined in the jet space. By default, all results of computations are scanned for the presence of such variables, and if letop is detected the computation is stopped with an error message. This usually means that we need to extend the order of the jet space, simply by increasing the number total_order.
Note that the folder containing all examples also contains a shell script, rrr.sh (which works only under bash, a GNU/Linux command interpreter), that can be used to run REDUCE on a given CDE program. When an error message about letop is issued, the script reruns the computation with the value of total_order increased by one.
The function that checks an expression for the presence of letop is check_letop. If you wish to switch off this kind of check in order to increase the speed, the switch checkord must be set off:
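In REDUCE, switches are turned off with the off statement:

```reduce
off checkord;  % disable the check for letop in results
```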
The computation of total derivatives of a huge expression can be extremely time- and resource-consuming. In some cases it is a good idea to disable the expansion of the total derivative and leave an expression of the type D_{σ}φ as indicated. This is achieved by the command
If you wish to restore the default behaviour, do
CDE can also compute on jets of supermanifolds. The theory can be found in [?, ?, ?]. The input can be organized as follows:
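Following the naming used in the next sentence, the extra declaration (in addition to those of the even jet space) would be:

```reduce
odd_var:={p};   % list of odd (anticommuting) dependent variables
```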
Here odd_var is the list of odd variables. The call
will create the jet space of the supermanifold described by the independent variables and the even and odd dependent variables, up to the order total_order. Total derivatives truncated at the order total_order will also include odd derivatives:
D_{λ} = ∂∕∂x^{λ} + u_{σλ}^{i} ∂∕∂u_{σ}^{i} + p_{σλ}^{j} ∂∕∂p_{σ}^{j}  (16.47)
where σ is a multi-index. The considerations on expansion and letop apply in this case too.
Odd variables can appear in anticommuting products; this is represented as
where ext(p_2xt,p) = − ext(p,p_2xt) and the variables are arranged in a unique way by an internal ordering. Indeed, in the internal representation of odd variables and their products (not intended for normal users!) all odd variables and their derivatives are indexed by integers, and products appear as ext applied to those integer indices. Note that p and ext(p) are just the same. The odd product of two expressions φ and ψ is achieved by the CDIFF function
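Assuming the function name super_product (as in the CDIFF/CDE sources), this reads:

```reduce
super_product(phi,psi);   % anticommuting product of phi and psi
```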
The derivative of an expression φ with respect to an odd variable p is achieved by
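Assuming the CDE function name df_odd (analogous to REDUCE’s df; check the CDE sources if in doubt), this reads:

```reduce
df_odd(phi,p);   % derivative of phi with respect to the odd variable p
```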
We now give the equation in the form of one or more derivatives equated to right-hand side expressions. The left-hand side derivatives are called principal, and the remaining derivatives are called parametric^{8}. Parametric coordinates are coordinates on the equation manifold and its differential consequences, while principal coordinates are determined by the differential equation and its differential consequences. For scalar evolutionary equations with two independent variables, parametric derivatives are of the type (u,u_{x},u_{xx},…). Note that the system must be in passive orthonomic form; this also means that there will be no nontrivial integrability conditions between parametric derivatives. (Lines beginning with % are comments for REDUCE.) The input is formed as follows (Burgers’ equation).
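For Burgers’ equation in the form u_t = u_xx − 2u u_x (matching the form of B used later in this guide; adjust the sign of the nonlinear term to your convention), the declaration could read as follows; the identifier names principal_der and de follow the CDE example files:

```reduce
% left-hand sides: the principal derivatives
principal_der:={u_t};
% right-hand sides, written in jet notation
de:={u_2x-2*u*u_x};
```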
Systems of PDEs are input in the same way: of course, the above two lists must have the same length. See 16.12.17 for an example.
The main routine in cde.red is called as follows:
Here the three empty lists are placeholders; they are important for computations with odd variables. The function cde computes principal and parametric derivatives of even and odd variables, they are stored in the lists all_parametric_der, all_principal_der, all_parametric_odd, all_principal_odd.
The function cde also defines total derivatives truncated at the order total_order and restricted on the (even and odd) equation; this means that total derivatives are tangent to the equation manifold. Their coordinate expressions are of the form
D_{λ} = ∂∕∂x^{λ} + u_{σλ}^{i} ∂∕∂u_{σ}^{i} + p_{σλ}^{j} ∂∕∂p_{σ}^{j}  (16.48)
where σ is a multi-index. It can happen that u_{σλ}^{i} (or p_{σλ}^{j}) is principal and must be replaced with differential consequences of the equation. Such differential consequences are called primary differential consequences, and are computed; in general they will depend on other, possibly new, differential consequences, and so on. Such newly appearing differential consequences are called secondary differential consequences. If the equation is in passive orthonomic form, the system of all differential consequences (up to the maximal order total_order) must be solvable in terms of parametric derivatives only. The function cde automatically computes all necessary and sufficient differential consequences which are needed to solve the system. The solved system is available in the form of REDUCE let-rules in the variables repprincparam_der and repprincparam_odd.
The syntax and properties (expansion and letop) of total derivatives remain the same. For example:
returns
It is possible to deal with mixed systems of even and odd variables. For example, in the case of Burgers’ equation we can input the linearized equation as a PDE in a new odd variable as follows (of course, in addition to what has been defined before):
The main routine in cde.red is called as follows:
CDE can compute variational derivatives of any function L (usually a Lagrangian density) or superfunction. We have the following coordinate expression
δL∕δu^{i} = (−1)^{|σ|} D_{σ}(∂L∕∂u_{σ}^{i})  (16.49)
which translates into the CDE commands
The Euler operator computes variational derivatives with respect to all even and odd variables in the jet space, and arranges them in a list of two lists, the list of even variational derivatives and the list of odd variational derivatives. The command is
All the above is used in the definition of Schouten brackets, as we will see in Subsection 16.12.7.
Linearizing (or taking the Fréchet derivative) of a vector function that defines a differential equation yields a differential operator in total derivatives. This operator can be restricted to the differential equation, which may be regarded as a differential constraint; the kernel of the restricted operator is the space of all symmetries (including higher or generalized symmetries) [?, ?].
Restricting the formal adjoint of the linearization operator to the corresponding differential equation yields a differential operator whose kernel contains all characteristic vectors, or generating functions, of conservation laws [?, ?].
Such operators are examples of C-differential operators. The (still incomplete) REDUCE implementation of the calculus of C-differential operators is the subject of this section.
Let us consider the spaces P = {φ: J^{r}(n,m) → ℝ^{k}} and Q = {ψ: J^{r}(n,m) → ℝ^{s}}.
A C-differential operator Δ: P → Q is defined to be a map of the type
Δ(φ)^{j} = a_{i}^{σj} D_{σ}φ^{i}  (16.50)
where a_{i}^{σj} are differentiable functions on J^{r}(n,m), 1 ≤ i ≤ k, 1 ≤ j ≤ s, and summation over the multi-index σ is understood. The order of Δ is the highest length |σ| occurring in the above formula.
We may consider a generalization to k-differential operators of the type
Δ(φ_{1},…,φ_{k})^{j} = a_{i_{1}…i_{k}}^{σ_{1}…σ_{k}j} D_{σ_{1}}φ_{1}^{i_{1}} ⋯ D_{σ_{k}}φ_{k}^{i_{k}}  (16.51)
A differential operator in CDE must be declared as follows:
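The declaration is done by mk_cdiffop; the signature below follows the CDE example files (the concrete numbers are an illustrative assumption), and the parameter meanings match the description that follows:

```reduce
% mk_cdiffop(opname, num_arg, length_arg, length_target)
mk_cdiffop(opname,1,{2},2);  % one 2-component argument, 2-component target
```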
where the parameters specify the number of arguments of the operator, the number of components of each argument, and the number of components of the target.
The syntax for one component of the operator opname is
The above operator will compute
∑_{σ_{1},…,σ_{h}} a_{i_{1}…i_{h}}^{σ_{1}…σ_{h}j} D_{σ_{1}}φ_{1}^{i_{1}} ⋯ D_{σ_{h}}φ_{h}^{i_{h}}  (16.52)
for fixed integer indices i_{1},…,i_{h} and j.
There are several operations which involve differential operators. Obviously they can be summed and multiplied by scalars.
An important example of a C-differential operator is the linearization, or Fréchet derivative, of a vector function F: J^{r}(n,m) → ℝ^{s}. This is the operator

ℓ_{F}: ϰ → Q,  ℓ_{F}(φ)^{j} = (∂F^{j}∕∂u_{σ}^{i}) D_{σ}φ^{i},
where ϰ = {φ: J^{r}(n,m) → ℝ^{m}} is the space of generalized vector fields on jets [?, ?].
Linearization can be extended to an operation that, starting from a k-differential operator, generates a (k + 1)-differential operator as follows:
(The above operation is also denoted by ℓ_{Δ,p_{1},…,p_{k}}(φ).)
At the moment, CDE is only able to compute the linearization of a vector function (Section 16.12.9).
Given a C-differential operator Δ as in (16.50), we can define its adjoint as
Δ^{*}(q)^{i} = (−1)^{|σ|} D_{σ}(a_{i}^{σj} q_{j})  (16.53)
Note that the matrix of coefficients is transposed. Again, the coefficients of the adjoint operator can be found by computing Δ^{*}(x^{σ}e_{j}) for every basis vector e_{j} and every monomial x^{σ}, where |σ| ≤ r and r is the order of the operator. This operation can be generalized to C-differential operators with h arguments.
At the moment, CDE can compute the adjoint of an operator with one argument (Section 16.12.9).
Now, consider two operators Δ: P → Q and ∇: Q → R. Then the composition ∇∘ Δ is again a differential operator. In particular, if
then
This operation can be generalized to differential operators with h arguments.
There is another important operation between differential operators with h arguments: the Schouten bracket [?]. We will discuss it in the next subsection, in the context of another formalism, where it takes an easier form [?].
In the papers [?, ?] (and independently in [?]) a scheme for dealing with (skew-adjoint) variational multivectors was devised. The idea was that operators of the type (16.51) could be represented by homogeneous vector superfunctions on a supermanifold, where odd coordinates q_{σ}^{i} correspond to total derivatives D_{σ}φ^{i}.
The isomorphism between the two languages is given by
a_{i}^{σj} D_{σ}φ^{i} ⟷ a_{i}^{σj} q_{σ}^{i}  (16.54)
where q_{σ}^{i} is the derivative of an odd dependent variable (and an odd variable itself).
A superfunction in CDE must be declared as follows:
where
The above parameters of the operator opname are stored in the property list^{9} of the identifier opname. This means that if one would like to know how many arguments the operator opname has, the answer will be the output of the command
and the same for the other parameters.
The syntax for one component of the superfunction sfname is
CDE is able to deal with differential operators in both formalisms, and provides conversion utilities:
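With the identifiers used in the next sentence (cdop, superfun), the conversion functions in the CDE sources read:

```reduce
conv_cdiff2superfun(cdop,superfun);  % operator -> vector superfunction
conv_superfun2cdiff(superfun,cdop);  % vector superfunction -> operator
```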
where in the first case a differential operator cdop is converted into a vector superfunction superfun with the same properties, and conversely.
We are interested in the operation of Schouten bracket between variational multivectors [?]. These are differential operators with h arguments in ϰ with values in densities, whose image is defined up to total divergences:
A(φ_{1},…,φ_{h}) = [a_{i_{1}…i_{h}}^{σ_{1}…σ_{h}} D_{σ_{1}}φ_{1}^{i_{1}} ⋯ D_{σ_{h}}φ_{h}^{i_{h}}]  (16.55)
[A_{1},A_{2}] = [δA_{1}∕δu^{i} · δA_{2}∕δp_{i} + δA_{1}∕δp_{i} · δA_{2}∕δu^{i}]  (16.56)
where δ∕δu^{i}, δ∕δp_{j} are the variational derivatives and the square brackets on the right-hand side should be understood as equivalence classes up to total divergences.
If the operators A_{1}, A_{2} are compatible, i.e., [A_{1},A_{2}] = 0, the expression (16.56) must be a total divergence. This means that:
ℰ(δA_{1}∕δu^{i} · δA_{2}∕δp_{i} + δA_{1}∕δp_{i} · δA_{2}∕δu^{i}) = 0,  (16.57)

where ℰ is the Euler operator.
If A_{1} is an h-vector and A_{2} is a k-vector, formula (16.56) produces an (h + k − 1)-vector, or a differential operator with h + k − 1 arguments. If we would like to check that this multivector is indeed a total divergence, we should apply the Euler operator and check that the result is zero. This procedure is considerably simpler than the analogous formula with operators (see for example [?]). All this is computed by CDE:
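With the identifiers of the next sentence (biv1, biv2, tv12), the call reads (function name as in the CDE sources):

```reduce
schouten_bracket(biv1,biv2,tv12);  % tv12 := [biv1,biv2]
```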
where biv1 and biv2 are bivectors, or differential operators with 2 arguments, and tv12 is the result of the computation, which is a three-vector (it is automatically declared to be a superfunction). Examples of this computation are given in Section 16.12.19.
Currently, CDE supports linearization of a vector function, or a differential operator with 0 arguments. The computation is performed in odd coordinates.
Suppose that we would like to linearize the vector function that defines the (dispersionless) Boussinesq equation [?]:
u_{t} − u_{x}v − uv_{x} − σv_{xxx} = 0,  v_{t} − u_{x} − vv_{x} = 0  (16.58)
where σ is a constant. Then a jet space with independent variables x,t, dependent variables u,v and odd variables in the same number as dependent variables p,q must be created:
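Mirroring the declarations of the free jet space (identifier names as in the CDE example files; the value of total_order is an illustrative assumption), the setup could be:

```reduce
indep_var:={x,t};
dep_var:={u,v};
odd_var:={p,q};
total_order:=8;
```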
The linearization of the above system and its adjoint are, respectively
ℓ_{Bou} = ( D_{t} − vD_{x} − v_{x}    −uD_{x} − u_{x} − σD_{x}^{3} )
          ( −D_{x}                   D_{t} − vD_{x} − v_{x}       ),

ℓ_{Bou}^{*} = ( −D_{t} + vD_{x}        D_{x}            )
              ( uD_{x} + σD_{x}^{3}   −D_{t} + vD_{x}  ).
The following command assigns to the identifier lbou the linearization differential operator ℓ_{Bou} of the vector function f_bou
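Assuming the vector function f_bou written in jet notation (signs as in (16.58), with sig standing for the constant σ) and the CDE function ell_function (name as in the CDE sources), this reads:

```reduce
f_bou:={u_t-u_x*v-u*v_x-sig*v_3x, v_t-u_x-v*v_x};
ell_function(f_bou,lbou);   % lbou := linearization of f_bou
```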
moreover, a superfunction lbou_sf is also defined as the vector superfunction corresponding to ℓ_{Bou}. Indeed, the following sequence of commands:
shows the vector superfunction corresponding to ℓ_{Bou}. To compute the value of the (1,1) component of the matrix ℓ_{Bou} applied to an argument psi do
In order to check that the result is correct one could define the linearization as a differential operator and then check that the corresponding superfunctions are the same:
the result of the two last commands must be zero.
The formal adjoint of lbou can be computed and assigned to the identifier lbou_star by the command
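The function name below follows the CDE sources (adjoint_cdiffop):

```reduce
adjoint_cdiffop(lbou,lbou_star);   % lbou_star := formal adjoint of lbou
```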
Again, the associated vector superfunction lbou_star_sf is computed, with values
Again, the above operator can be checked for correctness.
Once the linearization and its adjoint are computed, in order to do computations with symmetries and conservation laws such operators must be restricted to the corresponding equation. This can be achieved with the following steps:
Only the last step needs to be explained. If we are considering, e.g., the Boussinesq equation, then u_{t} and its differential consequences (i.e., the principal derivatives) are not automatically replaced by the right-hand side of the equation and its differential consequences. At the moment this step is not fully automatic. More precisely, only principal derivatives which appear as coefficients in total derivatives are replaced by their expressions. The lists of such derivatives with the corresponding expressions are repprincparam_der and repprincparam_odd (see Section 16.12.3). They are in the format of REDUCE replacement lists and can be used in let-rules. If the linearization or its adjoint happens to depend on another principal derivative, this must be computed separately. A forthcoming release of REDUCE will automate this procedure.
However, note that for evolutionary equations this step is trivial, as the restriction of linearization and its adjoint on the given PDE will only affect total derivatives which are restricted by CDE to the PDE.
In this section we show the computation of (some) higher [?] (or generalized [?]) symmetries of Burgers’ equation B = u_{t} − u_{xx} + 2uu_{x} = 0.
We provide two ways to solve the equations for higher symmetries. The first possibility is to use dimensional analysis. The idea is that one can use the scale symmetries of Burgers’ equation to assign “gradings” to each variable appearing in the equation (in other words, one can use dimensional analysis). As a consequence, one can try different ansätze for symmetries with polynomial generating functions. For example, it is possible to require that they are sums of monomials of given degrees. This ansatz yields a simplification of the equations for symmetries, because it is possible to solve them in a “graded” way, i.e., it is possible to split them into several equations made by the homogeneous components of the equation for symmetries with respect to the gradings.
In particular, Burgers’ equation translates into the following dimensional equation:

[u_{t}] = [u_{xx}] = [uu_{x}].
By the rules [u_{z}] = [u] − [z] and [uv] = [u] + [v], and choosing [x] = −1, we have [u] = 1 and [t] = −2. This will be used to generate the list of homogeneous monomials of given grading to be used in the ansatz for the generating function of the symmetries.
The file for the above computation is bur_hsy1.red and the results of the computation are in results/bur_hsy1_res.red.
Another possibility to solve the equation for higher symmetries is to use a PDE solver that is especially devoted to overdetermined systems, which is the distinguishing feature of systems coming from the symmetry analysis of PDEs. This approach is described below. The file for the above computation is bur_hsy2.red and the results of the computation are in results/bur_hsy2_res.red.
After loading CDE:
Here the new lists are scale degrees:
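With the gradings computed above ([x] = −1, [t] = −2, [u] = 1, and degree 1 for the odd variable), the lists would be (identifier names as in the CDE example files):

```reduce
deg_indep_var:={-1,-2};  % gradings of x, t
deg_dep_var:={1};        % grading of u
deg_odd_var:={1};        % grading of the odd variable
```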
We now give the equation and call CDE:
Higher symmetries of the given equation are functions sym depending on parametric coordinates up to some jet space order. We assume that they are graded polynomials in all parametric derivatives. In practice, we generate a linear combination of graded monomials with arbitrary coefficients, then we plug it into the equation of the problem and find the conditions on the coefficients that fulfill the equation. To construct a good ansatz, it may be necessary to make several attempts with different gradings, possibly including independent variables, etc. For this reason, ansatz-constructing functions are especially verbose. In order to use such functions they must be initialized with the following command:
Note the empty list at the end; it plays a role only in computations involving odd variables.
We need one operator equ whose components will be the equation of higher symmetries and its consequences. Moreover, we need an operator c which will play the role of a vector of constants, indexed by a counter ctel:
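In REDUCE this is done with operator declarations and a counter initialized to zero (the exact original input is not reproduced here; the statements below are a sketch grounded in the description above):

```reduce
operator equ;  % components: the symmetry equation and its consequences
operator c;    % vector of arbitrary constants c(1), c(2), ...
ctel:=0;       % counter for the constants
```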
We prepare a list of variables ordered by scale degree:
The function der_deg_ordering is defined in cde.red. It produces the given list using the list all_parametric_der of all parametric derivatives of the given equation up to the order total_order. The first two parameters can assume the values 0 or 1 and say that we are considering even variables and that the variables are of parametric type.
Then, since all parametric variables have positive scale degree, we prepare the list ansatz of all graded monomials of scale degree from 0 to 5.
More precisely, the command graded_mon produces a list of monomials of degrees from i to j, formed from the list of graded variables l_grad_var; the second command adds the zero-degree monomial; and the last command merges everything into a single list of all monomials.
Finally, we assume that the higher symmetry is a graded polynomial obtained from the above monomials (so, it is independent of x and t!)
Next, we define the equation ℓ_{B}(sym) = 0. Here, ℓ_{B} stands for the linearization (Section 16.12.9). A function sym that fulfills the above equation, on account of B = 0, is a higher symmetry.
We cannot define the linearization as a differential operator in this way:
as the linearization is performed with respect to parametric derivatives only! This means that the linearization has to be computed beforehand in a free jet space, then it may be used here.
So, the right way to go is
Note that for evolutionary equations the restriction of the linearization to the equation is equivalent to just restricting total derivatives, which is automatic in CDE.
The equation becomes
At this point we initialize the equation solver. This is a part of the CDIFF package called integrator.red (see the original documentation inside the folder packages/cdiff in REDUCE’s source code). In our case the above package will solve a large sparse linear system of algebraic equations on the coefficients of sym.
The list of variables, to be passed to the equation solver:
The number of initial equation(s):
Next command initializes the equation solver. It passes
Run the procedure splitvars_opequ on the first component of equ in order to obtain equations on the coefficients of each monomial.
Note that splitvars_opequ needs to know the indices of the first and the last equation in equ, and here we have only one equation, equ(1). The output tel is the final number of split equations, generated starting just after the initial equation equ(1).
Next command tells the solver the total number of equations obtained after running splitvars.
This command solves the equations for the coefficients. Note that we have to skip the initial equations!
The output is written in the result file by the commands
The command off nat turns off output in natural notation; results in this form are better only for visualization, not for writing to a file or as input to another computation. The command «resname» forces the evaluation of the variable resname to its string value. The commands out and shut open and close the output file. The command sym:=sym is evaluated only on the right-hand side.
One more example file is available; it concerns higher symmetries of the KdV equation. In order to deal with symmetries explicitly depending on x and t, it is possible to use REDUCE and CDE commands to build sym = x*(something of degree 3) + t*(something of degree 5) + (something of degree 2); this yields scale symmetries. Or we could use sym = x*(something of degree 1) + t*(something of degree 3) + (something of degree 0); this yields Galilean boosts.
CRACK is a PDE solver devoted mostly to the solution of overdetermined PDE systems [?, ?]. Several mathematical problems have been solved with the help of CRACK, like finding symmetries [?, ?] and conservation laws [?]. The aim of CDE is to provide a tool for computations with total derivatives, but it can be used to compute symmetries too. In this subsection we show how to interface CDE with CRACK in order to find higher (or generalized) symmetries of Burgers’ equation. To do that, after loading CDE and introducing the equation, we define the linearization of the equation, lbur.
We introduce the new unknown function ‘ansatz’. We assume that the function depends on parametric variables of order not higher than 3. The variables are selected by the function selectvars of CDE as follows:
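Using selectvars with the arguments described below, the parametric variables of order at most 3 could be collected as follows (the identifier even_vars reappears later in this subsection; the for ... join loop is standard REDUCE):

```reduce
even_vars:=for i:=0:3 join selectvars(0,i,dep_var,all_parametric_der)$
```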
In the arguments of selectvars, 0 means that we want even variables, i stands for the order of the variables, dep_var is the list of dependent variables to be selected by the command (here we use all dependent variables), and all_parametric_der is the set of variables from which the function will extract those with the required properties. In the current example we wish to get all higher symmetries depending on parametric variables of order not higher than 3.
The dependency of ansatz on the variables is declared with the standard REDUCE command depend:
The equation to be solved is the equation lbur(ansatz)=0, hence we give the command
The above command will issue an error if the list {total_eq} depends on the flag variable letop. In this case the computation has to be redone within a jet space of higher order.
The equation ell_b(ansatz)=0 is polynomial with respect to the variables of order higher than those appearing in ansatz. For this reason, its coefficients can be put to zero independently. This is the reason why the PDEs that determine symmetries are overdetermined. To tell this to CRACK, we issue the command
The list split_vars contains variables which are in the current CDE jet space but not in even_vars.
Then, we load the package CRACK and get results.
The results are in the variable crack_results:
So, we have three symmetries; of course, the generalized symmetry corresponds to c_8. Always remember to check the output of CRACK to see whether any of the symbols c_n is in fact a free function depending on some of the variables, and not just a constant.
In this section we will find (some) local conservation laws for the KdV equation F = u_{t} − u_{xxx} + uu_{x} = 0. Concretely, we have to find nontrivial 1-forms f = f_{x}dx + f_{t}dt on F = 0 such that d̄f = 0 on F = 0. “Triviality” of conservation laws is a delicate matter, for which we invite the reader to have a look at [?].
The files containing this example are kdv_lcl1, kdv_lcl2 and the corresponding results and debug files.
We suppose that the conservation law has the form ω = f_{x}dx + f_{t}dt. Using the same ansatz as in the previous example we assume
Next we define the equation d̄(ω) = 0, where d̄ is the total (horizontal) exterior derivative restricted to the equation.
After solving the equation as in the above example we get
Unfortunately it is clear that the conservation law corresponding to c(3) is trivial, because it is just the KdV equation. Here this fact is evident; but how can we get rid of less evident trivialities by an “automatic” mechanism? We considered this problem in the file kdv_lcl2, where we solved the equation
after having loaded the values fx and ft found by the previous program. In order to do that we have to introduce two new counters:
We make the following ansatz on f0:
After solving the system, issuing the commands
we obtain
This mechanism can be easily generalized to situations in which the conservation laws found by the program are difficult to treat by pen and paper. However, we will present another approach to the computation of conservation laws in Subsection 16.12.26.
In this section we will show how to compute local Hamiltonian operators for the Korteweg–de Vries, Boussinesq and Kadomtsev–Petviashvili equations. It is interesting to note that we will adopt the same computational scheme for all equations, even when the equation at hand is not in evolutionary form or has more than two independent variables. This scheme comes from a mathematical theory which started in [?] for evolution equations and was later extended to general differential equations in [?].
Here we will find local Hamiltonian operators for the KdV equation u_{t} = u_{xxx} + uu_{x}. A necessary condition for an operator to be Hamiltonian is that it sends generating functions (or characteristics, according to [?]) of conservation laws to higher (or generalized) symmetries. As proved in [?], this amounts to solving ℓ_{KdV}(φ) = 0 over the system

u_{t} = u_{xxx} + uu_{x},  p_{t} = p_{xxx} + up_{x},
or, in geometric terminology, finding the shadows of symmetries on the ℓ^{*}-covering of the KdV equation, with the further condition that the shadows must be linear in the p-variables. Note that the second equation (in odd variables!) is just the adjoint of the linearization of the KdV equation applied to an odd variable.
The file containing this example is kdv_lho1.
We stress that the linearization ℓ_{KdV}(φ) = 0 is formally the same equation as before, but now the total derivatives are lifted to the ℓ^{*}-covering, hence they also contain derivatives with respect to the p’s. We can define a linearization operator lkdv as usual.
In order to produce an ansatz which is a superfunction of one odd variable (or a linear function in odd variables) we produce two lists: the list l_grad_var of all even variables collected by their gradings and a similar list l_grad_odd for odd variables:
We need a list of graded monomials which are linear in odd variables. The function mkalllinodd produces all monomials which are linear with respect to the variables from l_grad_odd, have (monomial) coefficients from the variables in l_grad_var, and have total scale degrees from 1 to 6. Such monomials are then converted to the internal representation of odd variables.
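The role of mkalllinodd can be mimicked in ordinary Python. The sketch below (our names and data layout, not CDE's API) generates all graded monomials which are linear in the odd variables, given positive scale degrees for the even and odd variables:

```python
from itertools import combinations_with_replacement

def linear_odd_monomials(even, odd, max_deg):
    """even, odd: dicts {variable name: positive scale degree}.
    Return (monomial, degree) pairs, each monomial a tuple of factors,
    linear in the odd variables, of total degree at most max_deg."""
    mons = []
    for o, do in odd.items():
        # each even factor has degree >= 1, so at most max_deg factors fit
        for r in range(max_deg + 1):
            for combo in combinations_with_replacement(sorted(even.items()), r):
                deg = do + sum(d for _, d in combo)
                if deg <= max_deg:
                    mons.append((tuple(n for n, _ in combo) + (o,), deg))
    return mons

# toy gradings: [u] = 2, [u_x] = 3, odd variable [p] = 1 (deg_odd_var := 1)
result = linear_odd_monomials({'u': 2, 'ux': 3}, {'p': 1}, 4)
# degrees: p -> 1, u*p -> 3, ux*p -> 4; u*u*p has degree 5 and is dropped
assert sorted(m for m, _ in result) == [('p',), ('u', 'p'), ('ux', 'p')]
```

The actual CDE function additionally converts the monomials to the internal representation of odd variables; the combinatorics is the same.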
Note that all odd variables have positive scale degrees thanks to our initial choice deg_odd_var:=1;. Finally, the ansatz for local Hamiltonian operators:
After having set
and having initialized the equation solver as before, we do splitext
in order to split the polynomial equation with respect to the ext variables, then splitvars
in order to split the resulting polynomial equation in a list of equations on the coefficients of all monomials.
Now we are ready to solve all equations:
Note that we want all equations to be solved!
The results are the two well-known Hamiltonian operators for the KdV equation. After integration the function sym becomes
Of course, the results correspond to the operators
Note that each operator is multiplied by an arbitrary real constant, c(5) and c(2). The same problem can be approached using CRACK, as follows (file kdv_lho2.red). An ansatz is constructed by the following instructions:
Note that we have
Indeed, we are looking for a third-order operator whose coefficients depend on variables of order not higher than 3. This last property is introduced by
Then, we introduce the linearization (lifted on the cotangent covering)
and the equation to be solved, together with the usual test that checks for the need to enlarge the jet space:
Finally, we split the above equation by collecting all coefficients of odd variables:
and we feed CRACK the equations that require the above coefficients to vanish:
The results are the same as in the previous section:
Computations for systems of PDEs present no conceptual difference with respect to the previous computations for scalar equations. We will look for Hamiltonian structures for the dispersionless Boussinesq equation (16.58).
We will proceed by dimensional analysis. Gradings can be taken as
where p, q are the two odd coordinates. We have the ℓ_{Bou}^{*}-covering equation
We have to find Hamiltonian operators as shadows of symmetries on the above covering. At the level of source file (bou_lho1) the input data is:
The ansatz for the components of the Hamiltonian operator, of scale degree between 1 and 6, is
and the equation for shadows of symmetries is (lbou2 is taken from Section 16.12.9)
After the usual procedures for decomposing polynomials we obtain three local Hamiltonian operators:
There is a whole hierarchy of nonlocal Hamiltonian operators [?].
There is no conceptual difference in symbolic computations of Hamiltonian operators for PDEs in two independent variables and in more than two independent variables, regardless of whether the equation at hand is written in evolutionary form. As a model example, we consider the KP equation
 (16.59) 
Proceeding as in the above examples we input the following data:
and look for Hamiltonian operators of scale degree between 1 and 5:
After solving the equation for shadows of symmetries in the cotangent covering
we get the only local Hamiltonian operator
As far as we know there are no further local Hamiltonian operators.
Remark: the above Hamiltonian operator is already known in an evolutionary presentation of the KP equation [?]. Our mathematical theory of Hamiltonian operators for general differential equations [?] allows us to formulate and solve the problem for any presentation of the KP equation. Change of coordinate formulae could also be provided.
Let F = 0 be a system of PDEs. Here F ∈ P, where P is the module (in the algebraic sense) of vector functions P = {J^{r}(n,m) → ℝ^{k}}.
The Hamiltonian operators which have been computed in the previous Section are differential operators sending generating functions of conservation laws into generating functions of symmetries for the above system of PDEs:
 (16.60) 
As the operators are mainly used to define a bracket operation and a Lie algebra structure on conservation laws, two properties are required: skew-adjointness H^{*} = −H (corresponding to skew-symmetry of the bracket) and [H,H] = 0 (corresponding to the Jacobi property of the bracket).
In order to check the two properties we proceed as follows. Skew-adjointness is checked by computing the adjoint and verifying that its sum with the initial operator is zero.
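By way of illustration (in Python/sympy rather than REDUCE), the formal adjoint of a one-dimensional scalar operator ∑_k a_k D_x^k is phi ↦ ∑_k (−1)^k D_x^k(a_k phi). The sketch below (function names are ours) checks skew-adjointness of the second KdV Hamiltonian operator in the normalization u_{t} = u_{xxx} + uu_{x}:

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')(x)               # coefficient function
f = sp.Function('f')(x)               # test argument

def apply_op(coeffs, phi):
    """Apply sum_k coeffs[k] * D_x^k to phi."""
    return sum(c * sp.diff(phi, x, k) for k, c in enumerate(coeffs))

def apply_adjoint(coeffs, phi):
    """Formal adjoint: sum_k (-1)**k * D_x^k (coeffs[k] * phi)."""
    return sum((-1) ** k * sp.diff(c * phi, x, k) for k, c in enumerate(coeffs))

# H2 = D_x^3 + (2/3) u D_x + (1/3) u_x, written as its list of coefficients
H2 = [sp.Rational(1, 3) * sp.diff(u, x), sp.Rational(2, 3) * u, 0, 1]

# skew-adjointness H2* = -H2: the sum of H2 and its adjoint vanishes
assert sp.expand(apply_adjoint(H2, f) + apply_op(H2, f)) == 0
```

CDE performs the same kind of check internally, with total derivatives instead of ordinary ones.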
In the case of evolutionary equations, P = ϰ, and Hamiltonian operators (16.60) can also be interpreted as variational bivectors, i.e.
 (16.61) 
where the correspondence is given by
 (16.62) 
In terms of the corresponding superfunctions:
Note that the product p_{kσ}p_{i} is anticommutative since p’s are odd variables.
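The sign rule can be made concrete with a minimal Grassmann-algebra sketch in Python (an illustration of anticommutativity only; this is not CDE's internal representation of odd variables):

```python
from itertools import product

def sort_sign(idx):
    """Sort an index tuple by bubble sort, tracking the permutation sign."""
    idx, sign = list(idx), 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign, tuple(idx)

def wedge(a, b):
    """Product in the exterior algebra; elements are represented as
    {tuple of strictly increasing generator indices: coefficient}."""
    out = {}
    for (ia, ca), (ib, cb) in product(a.items(), b.items()):
        idx = ia + ib
        if len(set(idx)) < len(idx):
            continue                      # repeated generator: p_i p_i = 0
        sign, key = sort_sign(idx)
        out[key] = out.get(key, 0) + sign * ca * cb
    return {k: c for k, c in out.items() if c != 0}

p1, p2 = {(1,): 1}, {(2,): 1}
assert wedge(p1, p2) == {(1, 2): 1}
assert wedge(p2, p1) == {(1, 2): -1}      # p2 p1 = -p1 p2
assert wedge(p1, p1) == {}                # p1 p1 = 0
```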
After a differential operator of the type of H has been converted into a bivector, it is possible to apply the formulae (16.56) and (16.57) in order to compute the Schouten bracket. This is what we will see in the next section.
We can do the above computations using the KdV equation as a test case (see the file kdv_lho3.red).
Let us load the above operators:
We may convert the two operators into the corresponding superfunctions
The result of the conversion is
Skew-adjointness is checked at once:
and the result of the last two commands is zero.
Then we shall convert the two superfunctions into bivectors:
The output is:
Finally, the three Schouten brackets [Ĥ_{i},Ĥ_{j}] are computed, with i,j = 1,2:
the results are the well-known lists of zeros.
This subsection refers to the example file wdvv_biham1.red. The simplest nontrivial case of the WDVV equations is the third-order Monge–Ampère equation, f_{ttt} = f_{xxt}^{2} − f_{xxx}f_{xtt} [?]. This PDE can be transformed into hydrodynamic form,

via the change of variables a = f_{xxx}, b = f_{xxt}, c = f_{xtt}. This system possesses two Hamiltonian formulations [?]:

with the homogeneous first-order Hamiltonian operator
with the Hamiltonian H_{1} = ∫ c dx, and the homogeneous third-order Hamiltonian operator
Both operators are of Dubrovin–Novikov type [?, ?]. This means that the operators are homogeneous with respect to the grading in which D_{x} has degree 1. It follows that the operators are form-invariant under point transformations of the dependent variables, u^{i} = u^{i}(ũ^{j}). Here and in what follows we will use the letters u^{i} to denote the dependent variables (a,b,c). Under such transformations, the coefficients of the operators transform as differential-geometric objects.
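The change of variables a = f_{xxx}, b = f_{xxt}, c = f_{xtt} can be verified with sympy: the first two hydrodynamic equations hold identically, and the third is exactly the total x-derivative of the Monge–Ampère equation. This check is ours and is independent of CDE:

```python
import sympy as sp

x, t = sp.symbols('x t')
f = sp.Function('f')(x, t)

a = sp.diff(f, x, x, x)               # a = f_xxx
b = sp.diff(f, x, x, t)               # b = f_xxt
c = sp.diff(f, x, t, t)               # c = f_xtt

# a_t = b_x and b_t = c_x hold identically (equality of mixed partials)
assert sp.expand(sp.diff(a, t) - sp.diff(b, x)) == 0
assert sp.expand(sp.diff(b, t) - sp.diff(c, x)) == 0

# c_t = (b^2 - a c)_x differs from an identity exactly by D_x of the equation
E = sp.diff(f, t, t, t) - (b**2 - a * c)   # the Monge-Ampere equation is E = 0
assert sp.expand(sp.diff(c, t) - sp.diff(b**2 - a * c, x) - sp.diff(E, x)) == 0
```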
The operator A_{1} has the general structure
where the covariant metric g_{1ij} is flat, Γ_{k}^{ij} = g_{1}^{is}Γ_{sk}^{j} (here g_{1}^{ij} is the inverse matrix that represents the contravariant metric induced by g_{1ij}), and Γ_{sk}^{j} are the usual Christoffel symbols of g_{1ij}.
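If no specialized tensor package is at hand, the Christoffel symbols can also be produced by a short routine. Here is a sympy sketch (our code, with the index convention Gamma[j][s][k] = Γ^{j}_{sk}), tested on the flat metric diag(1, r²) of polar coordinates:

```python
import sympy as sp

def christoffel(g, coords):
    """Christoffel symbols of the second kind of the covariant metric g:
    Gamma^j_{sk} = (1/2) g^{jm} (d_s g_{mk} + d_k g_{ms} - d_m g_{sk})."""
    ginv = g.inv()
    n = len(coords)
    return [[[sp.simplify(sum(ginv[j, m]
                 * (sp.diff(g[m, k], coords[s]) + sp.diff(g[m, s], coords[k])
                    - sp.diff(g[s, k], coords[m])) for m in range(n)) / 2)
              for k in range(n)] for s in range(n)] for j in range(n)]

# check on polar coordinates: g = diag(1, r^2)
r, th = sp.symbols('r theta', positive=True)
Gamma = christoffel(sp.Matrix([[1, 0], [0, r**2]]), [r, th])
assert Gamma[0][1][1] == -r                       # Gamma^r_{theta theta}
assert sp.simplify(Gamma[1][0][1] - 1 / r) == 0   # Gamma^theta_{r theta}
```

The same routine applied to the covariant metric g_{1ij} yields the symbols needed for gamma_hi(i,j,k) after raising the first index.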
The operator A_{2} has the general structure
 (16.63) 
where the inverse g_{2ij} of the leading term transforms as a covariant pseudo-Riemannian metric. From now on we drop the subscript 2 for the metric of A_{2}. It was proved in [?] that, if we set c_{ijk} = g_{iq}g_{jp}c_{k}^{pq}, then
and the metric fulfills the following identity:
 (16.64) 
This means that the metric is a Monge metric [?]. In particular, its coefficients are quadratic in the variables u^{i}. It is easy to input the two operators in CDE. Let us start with A_{1}: we may define its entries one by one as follows
We could also use a specialized REDUCE package for the computation of the Christoffel symbols, like RedTen or GRG. Assuming that the operators gamma_hi(i,j,k) have been defined equal to Γ_{k}^{ij} and computed in the system using the inverse matrix g_{ij} of the leading coefficient contravariant metric,
then, provided we defined a list dep_var of the dependent variables, we could set
and
The third order operator can be reconstructed as follows. Observe that the leading contravariant metric is
Introduce the above matrix in REDUCE as gu3. Then set
and define c_{ijk} as
Then define c_{k}^{ij}
Introduce the contracted operator
Finally, define the operator A_{2}
Now, we can test the Hamiltonian property of A_{1}, A_{2} and their compatibility:
Needless to say, the result of the last three commands is a list of zeros.
We observe that the same software can be used to prove the bi-Hamiltonian property of a 6-component WDVV system [?].
The formulae (16.56), (16.57) hold also in the case of multidimensional operators, i.e. operators involving total derivatives with respect to more than one independent variable. Here we give one Hamiltonian operator H and two more variational bivectors P_{1}, P_{2}; all operators are of Dubrovin–Novikov type (homogeneous). We check the compatibility by computing [H,P_{1}] and [H,P_{2}]. Such computations are standard for the problem of computing the Hamiltonian cohomology of H.
This example has been provided by M. Casati. The file of the computation is dn2d_sb1.red. The dependent variables are p^{1}, p^{2}.
Let us set
(16.65) (16.66) 
P_{1}^{11} = −2 (∂g/∂p^{1}) p_{y}^{2} D_{x} + (∂g/∂p^{1}) p_{xy}^{2} + (∂^{2}g/∂p^{1}∂p^{2}) p_{x}^{2} p_{y}^{2} + (∂^{2}g/∂(p^{1})^{2}) p_{x}^{1} p_{y}^{2};
P_{1}^{21} = −f D_{x}^{2} + g D_{y}^{2} + (∂g/∂p^{2}) p_{y}^{2} D_{y} − ((∂f/∂p^{1}) p_{x}^{1} + 2 (∂f/∂p^{2}) p_{x}^{2}) D_{x} − (∂^{2}f/∂(p^{2})^{2}) p_{x}^{2} p_{x}^{2} − (∂^{2}f/∂p^{1}∂p^{2}) p_{x}^{1} p_{x}^{2} − (∂f/∂p^{2}) p_{2x}^{2};
P_{1}^{12} = f D_{x}^{2} − g D_{y}^{2} + (∂f/∂p^{1}) p_{x}^{1} D_{x} − ((∂g/∂p^{2}) p_{y}^{2} + 2 (∂g/∂p^{1}) p_{y}^{1}) D_{y} − (∂^{2}g/∂(p^{1})^{2}) p_{y}^{1} p_{y}^{1} − (∂^{2}g/∂p^{1}∂p^{2}) p_{y}^{1} p_{y}^{2} − (∂g/∂p^{1}) p_{2y}^{1};
P_{1}^{22} = −2 (∂f/∂p^{2}) p_{x}^{1} D_{y} + (∂f/∂p^{2}) p_{xy}^{1} + (∂^{2}f/∂p^{1}∂p^{2}) p_{x}^{1} p_{y}^{1} + (∂^{2}f/∂(p^{2})^{2}) p_{x}^{1} p_{y}^{2};
Let us check the skew-adjointness of the above bivectors:
Of course the last three commands produce two zeros each.
Let us compute Schouten brackets.
sb11(1) is trivially a list of zeros, while sb12(1) is nonzero and sb13(1) is again zero.
More formulae are currently being implemented in the system, such as symplecticity and the Nijenhuis condition for recursion operators [?]. Interested readers are warmly invited to contact R. Vitolo with questions or feature requests.
In this section we will show an experimental way to find nonlocal operators. The word ‘experimental’ reflects the lack of a comprehensive mathematical theory of nonlocal operators; in particular, a theoretical framework for Schouten brackets of nonlocal operators in the odd-variable language is still missing.
In any case we will achieve the results by means of a covering of the cotangent covering. Indeed, it can be proved that there is a one-to-one correspondence between (higher) symmetries of the initial equation and conservation laws on the cotangent covering. Such conservation laws provide new potential variables, hence a covering (see [?] for theoretical details on coverings).
In Section 16.12.26 we will also discuss a procedure for finding conservation laws from their generating functions that is of independent interest.
Here we will compute some nonlocal Hamiltonian operators for the KdV equation. The result of the computation (without the details below) has been published in [?].
We have to solve equations of the type ddx(ct)=ddt(cx) as in Section 16.12.14. The main difference is that we will attempt a solution on the ℓ^{*}-covering (see Subsection 16.12.15). For this reason, first of all we have to determine covering variables with the usual mechanism of introducing them through conservation laws, this time on the ℓ^{*}-covering.
As a first step, let us compute conservation laws on the ℓ^{*}-covering whose components are linear in the p’s. This computation can be found in the file kdv_nlcl1 and related results and debug files.
The conservation laws that we are looking for are in one-to-one correspondence with symmetries of the initial equation [?]. We will look for conservation laws which correspond to the Galilean boost, x-translation and t-translation at the same time. In the case of 2 independent variables and 1 dependent variable, one can prove that one component of such conservation laws can always be written as sym*p, as follows:
The second component must be found by solving an equation. To this aim we produce the ansatz
where we have already introduced the sets linodd6 and linodd8 of 6th and 8th degree monomials which are linear in odd variables (see the source code). For the first conservation law, solutions of the equation
are found by hand due to the presence of ‘t’ in the symmetry:
We also have the equations
They are solved in the usual way (see the source code of the example and the results file kdv_nlcl1_res).
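For the x-translation symmetry u_x the resulting conservation law on the ℓ^{*}-covering can be double-checked independently. In the sympy sketch below (our encoding; p is treated as an ordinary fiber variable, which is harmless for expressions linear in p, and the component names dens/flux are ours) the two components satisfy the conservation condition on the covering:

```python
import sympy as sp

N = 8
u = sp.symbols(f'u0:{N}')             # x-derivatives of u
p = sp.symbols(f'p0:{N}')             # x-derivatives of the covering variable p

def Dx(e):
    return sp.expand(sum(sp.diff(e, u[k]) * u[k + 1] + sp.diff(e, p[k]) * p[k + 1]
                         for k in range(N - 1)))

# l*-covering of u_t = u_xxx + u*u_x: the adjoint linearization gives
# p_t = p_xxx + u*p_x
ut = [u[3] + u[0] * u[1]]
pt = [p[3] + u[0] * p[1]]
for _ in range(4):
    ut.append(Dx(ut[-1]))
    pt.append(Dx(pt[-1]))

def Dt(e):
    return sp.expand(sum(sp.diff(e, u[k]) * ut[k] + sp.diff(e, p[k]) * pt[k]
                         for k in range(len(ut))))

# conservation law attached to the x-translation symmetry u_x:
dens = u[1] * p[0]                                        # the sym*p component
flux = u[3] * p[0] - u[2] * p[1] + u[1] * p[2] + u[0] * u[1] * p[0]
assert sp.expand(Dt(dens) - Dx(flux)) == 0                # conserved on the covering
```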
Now, we solve the equation for shadows of nonlocal symmetries in a covering of the ℓ^{*}-covering (source file kdv_nlho1). We can produce such a covering by introducing three new nonlocal (potential) variables ra, rb, rc. We are going to look for nonlocal Hamiltonian operators depending linearly on one of these variables. To this aim we modify the odd part of the equation to include the components of the above conservation laws as the derivatives of the new nonlocal variables:
The scale degree analysis of the local Hamiltonian operators of the KdV equation leads to the formulation of the ansatz
where linext is the list of graded monomials which are linear in odd variables and have degree 7 (see the source file). The equation for shadows of nonlocal symmetries on the ℓ^{*}-covering
is solved in the usual way, obtaining (in odd variables notation):
Higher nonlocal Hamiltonian operators could also be found [?]. The CRACK approach also holds for nonlocal computations.
Following the ideas in [?], a differential operator that sends symmetries into symmetries can be found as a shadow of symmetry on the ℓ-covering of the KdV equation, with the further condition that the shadows must be linear in the covering q-variables. The tangent covering of the KdV equation is
and we have to solve the equation ℓ_{KdV}(phi) = 0, where the total derivatives in ℓ_{KdV} are lifted over the tangent covering.
The file containing this example is kdv_ro1.red. The example closely follows the computational scheme presented in [?].
Usually, recursion operators are nonlocal: operators of the form D_{x}^{−1} appear in their expression. Geometrically we interpret this kind of operator as follows. We introduce a conservation law on the tangent covering of the form
where rt = uq + q_{xx} and rx = q. It has the remarkable feature of being linear with respect to the q-variables. A nonlocal variable r can be introduced as a potential of ω, by r_{x} = rx, r_{t} = rt. A computation of shadows of symmetries on the system of PDEs
yields, analogously to the previous computations,
The shadow q corresponds to the identity operator, which is (and must be!) always a solution; the other solution corresponds to the Lenard–Magri recursion operator
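In the normalization u_{t} = u_{xxx} + uu_{x} the Lenard–Magri operator reads R = D_{x}^{2} + (2/3)u + (1/3)u_{x}D_{x}^{−1}. The following sympy sketch (ours, outside CDE) applies R with the nonlocal term supplied as an explicit primitive and recovers the next flows of the KdV hierarchy:

```python
import sympy as sp

N = 10
u = sp.symbols(f'u0:{N}')            # u[k] is the k-th x-derivative of u

def Dx(e):
    return sp.expand(sum(sp.diff(e, u[k]) * u[k + 1] for k in range(N - 1)))

def R(phi, primitive):
    """Lenard-Magri operator R = D_x^2 + (2/3) u + (1/3) u_x D_x^{-1} of
    u_t = u_xxx + u*u_x; `primitive` is a local function with
    Dx(primitive) == phi, standing in for the nonlocal term D_x^{-1}(phi)."""
    assert Dx(primitive) == sp.expand(phi)
    return sp.expand(Dx(Dx(phi)) + sp.Rational(2, 3) * u[0] * phi
                     + sp.Rational(1, 3) * u[1] * primitive)

# R maps the x-translation u_x to the t-translation u_xxx + u*u_x ...
assert R(u[1], u[0]) == sp.expand(u[3] + u[0] * u[1])
# ... and the t-translation to the fifth-order symmetry of the hierarchy
phi5 = R(u[3] + u[0] * u[1], u[2] + u[0]**2 / 2)
assert phi5 == sp.expand(u[5] + sp.Rational(5, 3) * u[0] * u[3]
                         + sp.Rational(10, 3) * u[1] * u[2]
                         + sp.Rational(5, 6) * u[0]**2 * u[1])
```

Supplying the primitive by hand mirrors what the nonlocal variable r does geometrically: it trades D_{x}^{−1} for a new covering variable.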
The Plebanski (or second Heavenly) equation
 (16.67) 
is Lagrangian. This means that its linearization is self-adjoint: ℓ_{F} = ℓ_{F}^{*}, so that the tangent and cotangent coverings coincide, the odd equation being
 (16.68) 
It is not difficult to realize that the above equation can be written in explicit conservative form as
 (16.69) 
We can introduce a potential r for the above 2component conservation law. Namely, we can assume that
 (16.70) 
This is a new nonlocal variable for the (co)tangent covering of the Plebanski equation. We can load the Plebanski equation together with its nonlocal variable r as follows:
We can easily verify that the integrability condition for the new nonlocal variable holds:
the result is 0.
Now, we look for nonlocal recursion operators in the tangent covering using the new nonlocal odd variable r. We can load the equation exactly as before. We look for recursion operators which depend on r (which has scale degree 4); we produce the following ansatz for phi:
then we solve the equation of shadows of symmetries:
The solution is
hence we obtain the identity operator p and the new nonlocal operator r. It can be proved that changing coordinates to the evolutionary presentation yields the local operator (which has a much more complex expression than the identity operator) and one of the nonlocal operators of [?]. More details on this computation can be found in [?].
A short version history is provided here.
CDE 1.0 This version was published in October 2014. It was programmed in REDUCE’s algebraic mode, so its capabilities were limited, and its speed was severely affected by the systematic use of the package assist for manipulating algebraic lists. Its features were:
CDE 1.0 has never been included in the official REDUCE distribution; it is still available at [?].
[1] Geometry of Differential Equations web site: http://gdeq.org.
[2] A. V. BOCHAROV, V. N. CHETVERIKOV, S. V. DUZHIN, N. G. KHOR′KOVA, I. S. KRASIL′SHCHIK, A. V. SAMOKHIN, YU. N. TORKHOV, A. M. VERBOVETSKY AND A. M. VINOGRADOV: Symmetries and Conservation Laws for Differential Equations of Mathematical Physics, I. S. Krasil′shchik and A. M. Vinogradov eds., Translations of Math. Monographs 182, Amer. Math. Soc. (1999).
[3] D. BALDWIN, W. HEREMAN, A symbolic algorithm for computing recursion operators of nonlinear partial differential equations, International Journal of Computer Mathematics, vol. 87 (5), pp. 1094–1119 (2010).
[4] B.A. DUBROVIN, Geometry of 2D topological field theories, Lecture Notes in Math. 1620, SpringerVerlag (1996) 120–348.
[5] B.A. DUBROVIN AND S.P. NOVIKOV, Hamiltonian formalism of onedimensional systems of hydrodynamic type and the BogolyubovWhitham averaging method, Soviet Math. Dokl. 27 No. 3 (1983) 665–669.
[6] B.A. DUBROVIN AND S.P. NOVIKOV, Poisson brackets of hydrodynamic type, Soviet Math. Dokl. 30 No. 3 (1984), 651–654.
[7] E.V. FERAPONTOV, C.A.P. GALVAO, O. MOKHOV, Y. NUTKU, Bi-Hamiltonian structure of equations of associativity in 2-d topological field theory, Comm. Math. Phys. 186 (1997), 649–669.
[8] E.V. FERAPONTOV, M.V. PAVLOV, R.F. VITOLO, Projective-geometric aspects of homogeneous third-order Hamiltonian operators, J. Geom. Phys. 85 (2014), 16–28, DOI: 10.1016/j.geomphys.2014.05.027.
[9] E.V. FERAPONTOV, M.V. PAVLOV, R.F. VITOLO, Towards the classification of homogeneous thirdorder Hamiltonian operators, http://arxiv.org/abs/1508.02752
[10] E. GETZLER, A Darboux theorem for Hamiltonian operators in the formal calculus of variations, Duke J. Math. 111 (2002), 535–560.
[11] S. IGONIN, A. VERBOVETSKY, R. VITOLO: Variational Multivectors and Brackets in the Geometry of Jet Spaces, V Int. Conf. on on Symmetry in Nonlinear Mathematical Physics, Kyiv 2003; Part 3 of Volume 50 of Proceedings of Institute of Mathematics of NAS of Ukraine, Editors A.G. Nikitin, V.M. Boyko, R.O. Popovych and I.A. Yehorchenko (2004), 1335–1342; http://www.imath.kiev.ua/~snmp2003/Proceedings/vitolo.pdf.
[12] P.H.M. KERSTEN, I.S. KRASIL’SHCHIK, A.M. VERBOVETSKY, Hamiltonian operators and ℓ^{*}covering, Journal of Geometry and Physics 50 (2004), 273–302.
[13] P.H.M. KERSTEN, I.S. KRASIL’SHCHIK, A.M. VERBOVETSKY, A geometric study of the dispersionless Boussinesq equation, Acta Appl. Math. 90 (2006), 143–178.
[14] P. KERSTEN, I. KRASIL′SHCHIK, A. VERBOVETSKY, AND R. VITOLO, Hamiltonian structures for general PDEs, Differential equations: Geometry, Symmetries and Integrability. The Abel Symposium 2008 (B. Kruglikov, V. V. Lychagin, and E. Straume, eds.), SpringerVerlag, 2009, pp. 187–198, arXiv:0812.4895.
[15] I. KRASIL′SHCHIK AND A. VERBOVETSKY, Geometry of jet spaces and integrable systems, J. Geom. Phys. (2011) doi:10.1016/j.geomphys.2010.10.012, arXiv:1002.0077.
[16] I. KRASIL′SHCHIK, A. VERBOVETSKY, R. VITOLO, A unified approach to computation of integrable structures, Acta Appl. Math. (2012).
[17] I. KRASIL′SHCHIK, A. VERBOVETSKY, R. VITOLO, The symbolic computation of integrability structures for partial differential equations, book, to appear in the Springer series “Texts and monographs in symbolic computations” (2017).
[18] B. KUPERSCHMIDT: Geometric Hamiltonian forms for the Kadomtsev–Petviashvili and Zabolotskaya–Khokhlov equations, in Geometry in Partial Differential Equations, A. Prastaro, Th.M. Rassias eds., World Scientific (1994), 155–172.
[19] M. MARVAN, Sufficient set of integrability conditions of an orthonomic system. Foundations of Computational Mathematics 9 (2009), 651–674.
[20] F. NEYZI, Y. NUTKU, AND M.B. SHEFTEL, MultiHamiltonian structure of Plebanski’s second heavenly equation J. Phys. A: Math. Gen. 38 (2005), 8473. arXiv:nlin/0505030v2.
[21] A.C. NORMAN, R. VITOLO, Inside Reduce, part of the official REDUCE documentation included in the source code, see below.
[22] M.C. NUCCI, Interactive REDUCE programs for calculating classical, nonclassical, and approximate symmetries of differential equations, in Computational and Applied Mathematics II. Differential Equations, W.F. Ames, and P.J. Van der Houwen, Eds., Elsevier, Amsterdam (1992) pp. 345–350.
[23] M.C. NUCCI, Interactive REDUCE programs for calculating Lie point, nonclassical, LieBäcklund, and approximate symmetries of differential equations: manual and floppy disk, in CRC Handbook of Lie Group Analysis of Differential Equations. Vol. 3 N.H. Ibragimov, Ed., CRC Press, Boca Raton (1996) pp. 415–481.
[24] F. OLIVERI, RELIE, REDUCE software and user guide, http://mat521.unime.it/oliveri/.
[25] P. OLVER, Applications of Lie Groups to Partial Differential Equations, 2nd ed, GTM Springer, 1992.
[26] M.V. PAVLOV, R.F. VITOLO: On the biHamiltonian geometry of the WDVV equations, http://arxiv.org/abs/1409.7647
[27] G. SACCOMANDI, R. VITOLO: On the Mathematical and Geometrical Structure of the Determining Equations for Shear Waves in Nonlinear Isotropic Incompressible Elastodynamics, J. Math. Phys. 55 (2014), 081502.
[28] T. WOLF, A comparison of four approaches to the calculation of conservation laws, Euro. Jnl of Applied Mathematics 13 part 2 (2002), 129–152.
[29] T. WOLF, APPLYSYM  a package for the application of Liesymmetries, software distributed together with the computer algebra system REDUCE, (1995).
[30] T. WOLF, A. BRAND, Investigating DEs with CRACK and Related Programs, SIGSAM Bulletin, Special Issue, (June 1995), p 18.
[31] T. WOLF, An efficiency improved program LIEPDE for determining Lie-symmetries of PDEs, Proc. of Modern Group Analysis: advanced analytical and computational methods in mathematical physics, Catania, Italy, Oct. 1992, Kluwer Acad. Publ. (1993), 377–385.
[32] T. WOLF, A. BRAND: CRACK, user guide, examples and documentation http://lie.math.brocku.ca/Crack_demo.html. For applications, see also the publications of T. Wolf.