Air Force Institute of Technology
AFIT Scholar
Theses and Dissertations
Student Graduate Works
3-2023
Regular Simplices Within Doubly Transitive Equiangular Tight Frames
Evan C. Lake
Follow this and additional works at: https://scholar.afit.edu/etd
Part of the Mathematics Commons
Recommended Citation
Lake, Evan C., "Regular Simplices Within Doubly Transitive Equiangular Tight Frames" (2023). Theses and Dissertations. 6905.
https://scholar.afit.edu/etd/6905
This Thesis is brought to you for free and open access by the Student Graduate Works at AFIT Scholar. It has been
accepted for inclusion in Theses and Dissertations by an authorized administrator of AFIT Scholar. For more
information, please contact AFIT[email protected].
REGULAR SIMPLICES WITHIN
DOUBLY TRANSITIVE
EQUIANGULAR TIGHT FRAMES
THESIS
Evan C. Lake, SSgt, USAF
AFIT-ENC-MS-23-M-004
DEPARTMENT OF THE AIR FORCE
AIR UNIVERSITY
AIR FORCE INSTITUTE OF TECHNOLOGY
Wright-Patterson Air Force Base, Ohio
DISTRIBUTION STATEMENT A
APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED.
The views expressed in this document are those of the author and do not reflect the
official policy or position of the United States Air Force, the United States Department
of Defense or the United States Government. This material is declared a work of the
U.S. Government and is not subject to copyright protection in the United States.
REGULAR SIMPLICES WITHIN
DOUBLY TRANSITIVE EQUIANGULAR TIGHT FRAMES
THESIS
Presented to the Faculty
Department of Mathematics and Statistics
Graduate School of Engineering and Management
Air Force Institute of Technology
Air University
Air Education and Training Command
in Partial Fulfillment of the Requirements for the
Degree of Master of Science in Applied Mathematics
Evan C. Lake, B.S.
SSgt, USAF
March 23, 2023
Committee Membership:
Matthew Fickus, Ph.D.
Chair
John Jasper, Ph.D.
Member
Takayuki Iguchi, Ph.D.
Capt, USAF
Member
Abstract
An equiangular tight frame (ETF) yields an optimal way to pack a given number
of lines into a given space of lesser dimension. Every ETF has minimal coherence,
and this makes it potentially useful for compressed sensing. But, its usefulness also
depends on its spark: the size of the smallest linearly dependent subsequence of
the ETF. When formed into a sensing matrix, a larger spark means a lower chance
that information is lost when sensing a sparse vector. Spark is difficult to compute
in general, but if an ETF contains a regular simplex, then every such simplex is a
linearly dependent subsequence of the ETF of minimal size, and so its spark is known.
The “binder” of an ETF indicates all of the regular simplices that the ETF contains.
If the binder of an ETF is empty, then it contains no regular simplex, and so its spark
is larger than otherwise guaranteed. Proving that either holds, namely proving that
a particular ETF’s binder is empty or instead proving that it is not, provides useful
information about an ETF’s suitability for compressed sensing. In this thesis, we
focus on doubly transitive equiangular tight frames (DTETFs), namely ETFs whose
symmetry group acts in a doubly transitive way. We show that the binder of any
DTETF is either empty or forms a balanced incomplete block design (BIBD), a fact
that applies to several known infinite families of ETFs. We then fully characterize
the binders of every member of one such infinite family, symplectic ETFs, and their
Naimark complements. We show that all but a finite number of symplectic ETFs
have an empty binder. We also provide a general, closed-form expression for the
binder of the symplectic ETF’s Naimark complement and the number of blocks that
it contains; this disproves a conjecture posed in the recent literature.
Table of Contents

Abstract .......................................................... iv
I. Introduction .................................................... 1
  1.1 Mathematical Introduction .................................... 3
  1.2 Outline ...................................................... 7
  1.3 Contributions ................................................ 7
II. Preliminaries .................................................. 9
  2.1 Finite Frame Theory .......................................... 9
  2.2 The Restricted Isometry Property, Coherence, and the Spark Bound ... 13
  2.3 Equiangular Tight Frames .................................... 14
III. Binders and Double Transitivity .............................. 21
  3.1 Binders of Equiangular Tight Frames ......................... 21
    3.1.1 Binders and Spark ....................................... 22
    3.1.2 Balanced Incomplete Block Designs ....................... 24
    3.1.3 Binders and Balanced Incomplete Block Designs ........... 26
  3.2 Equivalence and Symmetry Groups ............................. 28
    3.2.1 Symmetry Groups ......................................... 29
  3.3 Doubly Transitive Equiangular Tight Frames .................. 31
IV. Symplectic Equiangular Tight Frames ........................... 34
  4.1 Symplectic Forms and Symplectic Equiangular Tight Frames .... 34
  4.2 Hyperbolic Bases ............................................ 39
  4.3 Symplectic Groups ........................................... 42
  4.4 The Binder of the Naimark Complement of the Symplectic Equiangular Tight Frame ... 45
  4.5 The Binder of the Symplectic Equiangular Tight Frame ........ 53
V. Conclusions .................................................... 58
Bibliography ...................................................... 61
REGULAR SIMPLICES WITHIN
DOUBLY TRANSITIVE EQUIANGULAR TIGHT FRAMES
I. Introduction
Let us pose a toy problem: imagine there are sixteen people with mobile phones,
all connected to the same tower. Each of the phones needs to send data to the tower,
and the tower needs to be able to distinguish between the phones even when they
are transmitting simultaneously. One way of achieving this is to assign each phone a
column of a 16 × 16 Hadamard matrix, that is, a matrix whose entries are either 1
or −1 and whose columns are orthogonal. When each phone wants to communicate
a 0, it transmits its column vector. If it wants to communicate a 1, it transmits the
negative of its column vector. The tower receives these transmissions all at once as
a linear combination of the transmitted signals, and multiplies that received signal
vector by the transpose of the Hadamard matrix. Since the phones’ columns are
orthogonal, for each column this multiplication will “zero out” the signals of other
phones, leaving a value that is either positive or negative depending on which bit the
phone was transmitting. This idea is the foundation of a popular real-world digital
communication scheme known as code division multiple access (CDMA).
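This Hadamard-based scheme is easy to simulate. The following Python sketch (a toy illustration of the idea described above; the Sylvester construction and NumPy usage are our own choices, not specified in the text) builds a 16 × 16 Hadamard matrix, lets all sixteen phones transmit one bit each, and decodes the superposition at the tower:

```python
import numpy as np

# Sylvester construction: four Kronecker steps give a 16 x 16 Hadamard
# matrix whose entries are +1/-1 and whose columns are orthogonal.
H = np.array([[1]])
for _ in range(4):
    H = np.kron(H, np.array([[1, 1], [1, -1]]))

# Each of the 16 phones sends +/- its assigned column: +1 encodes bit 0,
# -1 encodes bit 1. The tower hears the sum of all 16 transmissions.
bits = np.random.randint(0, 2, size=16)
received = H @ (1 - 2 * bits)

# Correlating the received vector with each column "zeros out" the other
# phones, leaving +/-16 whose sign reveals each phone's bit.
correlations = H.T @ received
decoded = (correlations < 0).astype(int)
```

Because the columns are exactly orthogonal, the decoding here is exact for every possible bit pattern, which is the "excellent integrity" noted below.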
The integrity of this system is excellent, but its efficiency may be poor under
certain circumstances. Any time a phone wants to send a single bit, it instead has to
send sixteen (and this is, of course, a very small example). In particular, there may be
a great deal of wasted efficiency if the system is only being used in a sparse manner,
namely that at any given moment, most of the phones aren’t actively transmitting.
Why use a full 16 ×16 matrix if only two or three phones are transmitting at a time?
Let us imagine that we instead assign each phone a column from a matrix Φ of size
6 × 16. Clearly, as rank(Φ) ≤ 6, these columns cannot be orthogonal. In particular,
any seven columns of this matrix are linearly dependent, and so certain combinations
of transmissions can be confused with others: four transmissions might be confused
with three and vice versa, and similarly five transmissions with two, six with one, and
seven with none. This chance can be reduced, however, if we choose the columns of
Φ to be as orthogonal as possible in specific ways, and we moreover know that only
a certain small number of phones are transmitting at any given time. Notably under
this scheme, we do not need to know which phones are transmitting, just that the
number of them is small enough.
This is an example of a dimensionality reduction problem: how can we be more
efficient in collecting, transmitting, and analyzing data while reducing the risk of
losing information? Such problems arise in a variety of real-world applications including
waveform design for wireless communication [22], compressed (remote) sensing [8],
and quantum information theory [26]. The U.S. Department of Defense, especially
the U.S. Air Force and U.S. Space Force, is interested in a number of these
applications [25]. This field of study provides a mathematical framework for helping to
answer questions like “What are efficient ways to gather and transmit data?” and
“Can we get essentially the same information with fewer measurements?”
Frame theory, and equiangular tight frames in particular, are commonly used
to design matrices suitable for such purposes. In this thesis, we will summarize
this field of study and discuss how it relates to certain open mathematical problems
in compressed sensing. We will then prove several new results, including one that
disproves a conjecture made in the recent literature [4], and then proves a modified
version of it. Using our mobile phone analogy, our new results will help us answer the
questions “How many phones can transmit simultaneously with a guarantee that no
information will be lost?” and “If too many phones are transmitting simultaneously,
what is the probability of information loss?”
1.1 Mathematical Introduction
Let Φ := {φ_n}_{n∈N} be a finite sequence of N nonzero vectors in a Hilbert space
E over F ∈ {R, C} of dimension D < N. We would like to be able to recover any
vector x ∈ F^N from y = Φx := Σ_{n∈N} x(n)φ_n uniquely and practically, under the
assumption that x is sparse, that is, when ∥x∥₀ := |{n ∈ N : x(n) ≠ 0}| is sufficiently
small. In terms of our toy problem, the latter assumption models our requirement
that only a few phones are transmitting at any given moment.
Since D < N, there are, of course, multiple solutions of Φx = y. But, if such
solutions x₁ and x₂ are sparse, then they must differ by a member x₁ − x₂ of

ker(Φ) := {x ∈ F^N : Φx = 0}

that is also sparse, though possibly less sparse than x₁ and x₂ themselves. So then
we can guarantee a unique sparse solution if we know that

spark(Φ) := min{∥x∥₀ : 0 ≠ x ∈ ker(Φ)}

is sufficiently large, that is, if every K-vector subsequence Φ_K := {φ_n}_{n∈K} of Φ is
linearly independent for large enough values of K. In our toy problem, for example,
assume that only two phones are transmitting at a time, and that any four of the
vectors in {φ_n}_{n∈N} are linearly independent (spark(Φ) > 4). Then any received signal
can only arise in a unique way: since spark(Φ) > 4, for any y ∈ R⁶ there is at most
one x ∈ R¹⁶ with ∥x∥₀ ≤ 2 such that y = Φx. Any other combination of transmissions
that yields the same received signal would require at least three active transmitters.
So as long as the tower has an efficient way to identify which phones those are, there
is no risk of confusion.
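Spark can be computed for small examples by brute force, directly from its definition as the size of the smallest linearly dependent subsequence. The sketch below is our own illustration (the rank tolerance is an arbitrary choice); it checks every subset of columns in increasing order of size:

```python
import itertools
import numpy as np

def spark(Phi, tol=1e-10):
    """Size of the smallest linearly dependent set of columns of Phi.

    Brute force over all column subsets in increasing size; this is
    exponential in the number of columns and only feasible for tiny N
    (computing spark is hard in general).
    """
    m, n = Phi.shape
    for k in range(1, n + 1):
        for cols in itertools.combinations(range(n), k):
            # a subset is dependent iff its submatrix is rank deficient
            if np.linalg.matrix_rank(Phi[:, cols], tol=tol) < k:
                return k
    return np.inf  # all columns are linearly independent

# Example: the third column is the sum of the first two, so any two
# columns are independent but some three are dependent: spark = 3.
Phi = np.array([[1.0, 0.0, 1.0],
                [0.0, 1.0, 1.0]])
```

The exponential cost of this search is precisely why, as discussed next, computing spark is intractable in general and indirect certificates such as coherence are so valuable.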
If Φ has the restricted isometry property (RIP), meaning that every K-vector
subsequence Φ_K is approximately orthogonal in a particular way, then convex
optimization can be used to recover this sufficiently sparse x from y = Φx [6]. Randomly
chosen Φ are likely to have the RIP for remarkably large values of K, but certifying
the RIP for a general Φ is computationally hard [2], as is computing the spark of a
general Φ [1]. So
if we choose a random Φ of sufficient size for a larger-scale version of our toy problem,
it is extremely likely that Φ will have the properties we require for guaranteed sparse
signal recovery. But it might not, and even if it does, we will not be able to certify
that fact. While it would be nice to have a deterministic Φ which is guaranteed to
have the RIP to a degree similar to that of a random Φ, the problem of constructing
one remains stubbornly open [5, 3].
While the RIP is difficult to certify, the coherence of Φ,

coh(Φ) := max_{n₁≠n₂} |⟨φ_{n₁}, φ_{n₂}⟩| / (∥φ_{n₁}∥∥φ_{n₂}∥), (1)

is a related concept that is much simpler to calculate and that still provides
information about spark(Φ). In particular, Donoho and Elad’s spark bound [7],

spark(Φ) ≥ 1/coh(Φ) + 1, (2)

tells us that a Φ with low coherence may be well suited for compressed sensing, that is,
Φ may be well suited to uniquely determine a sparse x by y = Φx. Recall from our toy
problem that a larger spark means more phones can transmit simultaneously without
losing information. A large spark alone does not guarantee that a practical algorithm
for recovering x from y exists. However, when Φ has low coherence, methods such
as orthogonal matching pursuit can be used to uniquely recover any x ∈ F^N from
y = Φx with ∥x∥₀ ≤ K < ½{[coh(Φ)]⁻¹ + 1} [23].
A second bound which is important in terms of coherence, and therefore spark, is
the Welch bound [24],

coh(Φ) ≥ [(N − D) / (D(N − 1))]^{1/2}, (3)

which provides a lower bound on the coherence of any such sequence of N vectors
{φ_n}_{n∈N} in a space of dimension D, and which has proven useful in recent work on this
problem.
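To make these bounds concrete, here is a small numerical check (our own illustration, not from the sources cited above) using three unit vectors spaced 120° apart in R². As discussed below, this configuration achieves equality in the Welch bound, and the spark bound then guarantees spark(Φ) ≥ 3:

```python
import numpy as np

def coherence(Phi):
    """Largest |<phi_i, phi_j>| over distinct unit-normalized columns; cf. (1)."""
    U = Phi / np.linalg.norm(Phi, axis=0)
    G = np.abs(U.conj().T @ U)
    np.fill_diagonal(G, 0)  # ignore the diagonal entries |<phi_n, phi_n>| = 1
    return G.max()

# Three unit vectors spaced 120 degrees apart in R^2 (D = 2, N = 3).
angles = 2 * np.pi * np.arange(3) / 3
Phi = np.vstack([np.cos(angles), np.sin(angles)])

c = coherence(Phi)                        # 1/2
welch = np.sqrt((3 - 2) / (2 * (3 - 1)))  # Welch bound (3), also 1/2
spark_lower = 1 / c + 1                   # spark bound (2): spark >= 3
```

Since three vectors in a two-dimensional space are necessarily dependent, spark(Φ) is exactly 3 here, so the spark bound (2) is met with equality.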
Assuming without loss of generality that Φ is equal norm, it is known that it
achieves equality in the Welch bound, and thus has minimal coherence, if and only
if it is an equiangular tight frame (ETF) for E, that is, there exists A > 0 such that
Σ_{n∈N} ⟨φ_n, y⟩φ_n = Ay for all y ∈ E (tightness) and C > 0 such that |⟨φ_{n₁}, φ_{n₂}⟩| = C
whenever n₁ ≠ n₂ (equiangularity). Many infinite families of ETFs are known. They
arise, for example, from algebro-combinatorial designs; see [16] for a survey. It is also
known that every ETF Φ has a Naimark complement Ψ which is itself an N-vector
ETF for a space of dimension N − D. As we will explore in this thesis, it is possible
to simultaneously analyze an entire family of ETFs for certain types of suitability
for compressed sensing. But, because of the sheer variety of ways in which ETFs are
constructed, it seems unlikely that one could generalize such analysis to ETFs as a
whole. Our approach relies on understanding which, if any, subsequences of a given
ETF themselves form a special type of ETF known as a regular simplex.
To explain, a regular simplex is an N-vector ETF {φ_n}_{n∈N} for a Hilbert space of
dimension D = N − 1. Sometimes, given a more general ETF Φ for some Hilbert
space E, it is possible to find a K-vector subsequence Φ_K := {φ_n}_{n∈K} of Φ such that
Φ_K is a regular simplex for some proper subspace of E. We refer to the set of all
subsets K of N such that Φ_K is a regular simplex as the binder of Φ, denoted bin(Φ).
Some nonobvious facts from [10], which we review in detail later in this document, are
that bin(Φ) is nonempty if and only if Φ achieves equality in the spark bound (2),
and that when this occurs, each K ∈ bin(Φ) corresponds to a linearly dependent Φ_K;
these are precisely the linearly dependent subsequences of Φ of minimal size [10]. In
particular, if bin(Φ) is nonempty then spark(Φ) = 1/coh(Φ) + 1, and if bin(Φ) is empty
then spark(Φ) > 1/coh(Φ) + 1. Knowing which fact holds about a particular ETF
provides us information about spark(Φ), and thus also about the suitability of Φ for
compressed sensing.
Sometimes bin(Φ) forms a type of combinatorial design known as a balanced in-
complete block design (BIBD). For example, every member of certain infinite families
of Gabor-Steiner ETFs has a binder that forms a BIBD [4], and experimentation in-
dicates this is also true for various other examples [10]. When this occurs, this BIBD
provides greater insight into the ETF and its suitability for compressed sensing. For
example, if bin(Φ) forms a BIBD, we can phase its incidence matrix to obtain a sparse
representation of the Naimark complement of Φ [14, 10]. Additionally, the number of
blocks in such a BIBD helps estimate the probability that two distinct sparse vectors
x₁ and x₂ have Φx₁ = y = Φx₂. We caution, however, that many harmonic ETFs have
nontrivial binders that are seemingly not BIBDs [10, 19].
In this thesis, we will explore one condition that is sufficient to guarantee that
bin(Φ) is either empty or a BIBD. Namely, we will use the recent realization [20, 21]
that many ETFs are doubly transitive. That is, they have the property that any
distinct pair of their indices can be mapped to any other such pair via the natural
action of their symmetry group.
1.2 Outline
In Chapter 2, we will establish the necessary notation and review the linear algebra
and frame theory needed to discuss ETFs and their properties. In Chapter 3, we show
that the binder of any doubly transitive equiangular tight frame (DTETF) is either
empty or forms a BIBD (Theorem 3.3.2). In Chapter 4, we show that the binder
of the Naimark complement of any symplectic ETF consists of all affine Lagrangian
subspaces in the underlying finite symplectic space and compute the parameters of
the corresponding BIBD (Theorem 4.4.3). This disproves a conjecture posed in [4],
and then proves a modified version of it. We moreover show that the binder of any
symplectic ETF is empty, except in two notable cases (Theorem 4.5.2).
1.3 Contributions
This thesis proves three new main results that directly apply to compressed sens-
ing. Theorem 3.3.2 provides a strong necessary condition on the binder of any
DTETF. Though a full characterization of such ETFs has recently been obtained
[21], and it includes a number of infinite families of them, the only instances of these
whose binders have been fully characterized are those provided in this thesis and in
the accompanying journal article [15]. Theorem 3.3.2 thus establishes a research pro-
gram within the frame theory community to determine the binder of every DTETF.
This thesis begins that program by fully characterizing the binder of two infinite
families of DTETFs, namely the symplectic ETFs and their Naimark complements;
these are provided in our second and third main results, namely Theorems 4.4.3 and
4.5.2 respectively. Our accompanying journal article [15] does the same for four other
infinite families of DTETFs.
The subject of an ETF’s binder (specifically, whether it forms a BIBD and, if
so, with what parameters) is important to the community to such a degree that [4]
makes a published conjecture to this end about the members of a single infinite family
of them. Theorem 4.4.3 of this thesis proves that those binders form a BIBD and
moreover provides direct formulae for their parameters, disproving the conjecture as
stated, but also proving that it is true in spirit. Theorem 4.5.2, in contrast, implies
that every member of a highly symmetric infinite family of real ETFs has an empty
binder, something that has seldom if ever been observed for such frames before,
a result which gives hope for the continued pursuit of deterministic matrices with
excellent restricted isometry constants.
II. Preliminaries
2.1 Finite Frame Theory
Given any field F and any finite nonempty set N, let F^N := {x : N → F} be
the vector space over F of all functions from N into F. We equip this space with the
standard basis {δ_n}_{n∈N}, where δ_n(n′) = 1 when n′ = n and δ_n(n′) = 0 otherwise. We
can think of x = {x_n}_{n∈N}, a sequence of N := |N| elements of F indexed by N, as
a generalization of the ‘ordered tuple’ notation that arises by taking the Cartesian
product of N copies of F.
This generalization extends to matrices. Given two finite nonempty sets N and
M, we identify a linear mapping A from F^N to F^M with a member of F^{M×N}. The
latter is a matrix, but generalized so that its entries are indexed by pairs of elements
from general finite nonempty sets instead of just pairs of integers. While these
sets do not have to correlate in any way, note that the underlying field F has to be
the same. The way we identify a linear transformation A : F^N → F^M with a matrix
A ∈ F^{M×N} is by having A(δ_n) = Σ_{m∈M} A(m, n)δ_m for all n ∈ N.
The synthesis operator of a finite sequence {φ_n}_{n∈N} of vectors in a vector space
V over some field F is Φ : F^N → V, Φx := Σ_{n∈N} x(n)φ_n. This mapping Φ can be
regarded as a concatenation of {φ_n}_{n∈N} into a ‘matrix’ of N columns indexed by N
(if we are willing to consider ‘matrices’ whose columns can be any kind of vector).
Since this concatenation is a one-to-one correspondence, the synthesis operator Φ is
identified with a unique {φ_n}_{n∈N} defined by φ_n := Φδ_n. Because the two are uniquely
paired in this way, we will occasionally refer to {φ_n}_{n∈N} as Φ.

Additionally, note that there is a one-to-one correspondence between N and the
vectors of Φ. Since N is simpler notationally than {φ_n}_{n∈N}, many of our definitions
will refer to this indexing set of the vectors rather than the vectors themselves.
So then, Φx is the linear combination obtained by summing the N entries of x
times the corresponding vectors of {φ_n}_{n∈N}. Therefore the range of Φ is the set of
all such linear combinations, that is, im(Φ) = span({φ_n}_{n∈N}). We also note that
{φ_n}_{n∈N} is linearly independent if and only if ker(Φ) = {0}, that is, when Φ is
one-to-one. Thus, Φ is invertible if and only if it is both one-to-one and onto, which
happens if and only if {φ_n}_{n∈N} is a basis for the underlying vector space V in which
it lies.
When F is either R or C, we equip F^N with the inner product

⟨x₁, x₂⟩ := Σ_{n∈N} conj[x₁(n)] x₂(n).

We use inner products that are conjugate-linear in their left arguments since this
simplifies other notation throughout.
Now assume V is some Hilbert space E, and so F is either R or C. In this setting
Φ has an adjoint: the analysis operator Φ* : E → F^N, (Φ*y)(n) = ⟨φ_n, y⟩. If we
regard a single vector φ ∈ E as the mapping φ : F → E, φ(z) := zφ, then its adjoint
is the linear functional φ* : E → F, φ*(y) = ⟨φ, y⟩.
The frame operator of Φ is ΦΦ* : E → E, ΦΦ* = Σ_{n∈N} φ_n φ_n*. The Gram matrix
of Φ is Φ*Φ ∈ F^{N×N}, (Φ*Φ)(n₁, n₂) = ⟨φ_{n₁}, φ_{n₂}⟩. It is known from linear algebra that
both ΦΦ* and Φ*Φ are positive semidefinite, and that

rank(ΦΦ*) = rank(Φ*Φ) = rank(Φ) = dim(span{φ_n}_{n∈N}).
Conversely, by the spectral theorem, we can factor any positive semidefinite G ∈
F^{N×N} as G = Φ*Φ, where Φ is the synthesis operator of some finite sequence of
vectors {φ_n}_{n∈N} that lies in a Hilbert space E over F of dimension rank(G). This is
not a one-to-one correspondence; applying any unitary transformation to Φ will yield
another sequence of vectors that satisfies this condition. Such sequences are, however,
unique up to these unitary transformations.
When E = F^M, Φ is the M×N matrix whose nth column is φ_n, Φ* is its N×M
conjugate transpose, ΦΦ* is their M×M product, and Φ*Φ is their N×N product.
If E ≠ F^M but is any Hilbert space over F of the same dimension |M| as F^M, then
there exists a unitary transformation from E onto F^M. So for the purposes of this
thesis, we can always assume without loss of generality that E is F^M, if so desired.
We say that {φ_n}_{n∈N} is a tight frame for E if there exists A > 0 such that
ΦΦ* = AI; namely, such that y = (1/A) Σ_{n∈N} ⟨φ_n, y⟩φ_n for all y ∈ E. An orthonormal
basis is the simplest example of a tight frame, with A = 1. When {φ_n}_{n∈N} is an
orthonormal basis for E, any y ∈ E decomposes as y = Σ_{n∈N} ⟨φ_n, y⟩φ_n, where for
each n ∈ N, ⟨φ_n, y⟩φ_n is the projection of y onto the one-dimensional subspace of
E that contains φ_n. The concept of a tight frame generalizes this idea: the
projections are allowed to be positively scaled, and the number of one-dimensional
subspaces can exceed the dimension of the space, so the vectors may be linearly
dependent. When Φ is tight, the scaled projections are so carefully aligned that they
‘overlap’ each other in such a consistent way that they sum to a scaled but otherwise
undistorted version of the original vector.
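For instance, three unit vectors at angles 0°, 120°, and 240° in R² (often called the Mercedes-Benz frame) form a tight frame with A = 3/2. The following check is our own numerical illustration of tightness and of the reconstruction formula above:

```python
import numpy as np

# Three unit vectors at 0, 120, and 240 degrees in R^2: a tight frame
# for R^2 with A = 3/2.
angles = 2 * np.pi * np.arange(3) / 3
Phi = np.vstack([np.cos(angles), np.sin(angles)])
A = 3 / 2

frame_op = Phi @ Phi.T  # frame operator; tightness means this equals A * I

# Tightness equivalently says y = (1/A) * sum_n <phi_n, y> phi_n for all y.
y = np.array([0.3, -1.7])
reconstructed = (Phi @ (Phi.T @ y)) / A
```

Even though the three scaled projections overlap, their sum returns an undistorted copy of y once divided by A, exactly as described above.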
Clearly then, any tight frame for E has to span E. So we will more generally call
{φ_n}_{n∈N} a tight frame without specifying a space, with the implicit understanding
that we are saying it is a tight frame for its span. This means that there exists some
A > 0 such that ΦΦ*y = Ay for all y ∈ im(Φ), which is equivalent to ΦΦ*Φ = AΦ.
When this occurs, P := (1/A)Φ*Φ is a projection (orthogonal projection operator):

P*P = [(1/A)Φ*Φ]* (1/A)Φ*Φ = (1/A²)Φ*(ΦΦ*Φ) = (1/A²)Φ*AΦ = (1/A)Φ*Φ = P.

So the Gram matrix Φ*Φ of a tight frame is a positive multiple of a projection.
Conversely, if we assume that Φ*Φ = AP where A > 0 and P*P = P, then since
ker(L) = ker(L*L) in general,

ker[Φ(I − P)] = ker[(Φ(I − P))* Φ(I − P)]
             = ker[(I − P)* Φ*Φ (I − P)]
             = ker[(I − P) AP (I − P)]
             = ker(0)
             = F^N,

where we have used the fact that P(I − P) = P − P² = P − P = 0. This implies
that 0 = Φ(I − P) = Φ − (1/A)ΦΦ*Φ, and so ΦΦ*Φ = AΦ.
Thus, {φ_n}_{n∈N} is a tight frame if and only if its Gram matrix is a positive
multiple of a projection, namely, if AΦ*Φ = (Φ*Φ)² for some A > 0, which we can
write as

A⟨φ_{n₁}, φ_{n₃}⟩ = Σ_{n₂∈N} ⟨φ_{n₁}, φ_{n₂}⟩⟨φ_{n₂}, φ_{n₃}⟩ (4)

for all n₁, n₃ ∈ N.
Since, as we discussed earlier, any positive semidefinite matrix is a Gram matrix,
this means that any positive multiple of a projection G is the Gram matrix of some
tight frame {φ_n}_{n∈N}. When this occurs,

G = Φ*Φ, dim(span{φ_n}_{n∈N}) = rank(Φ) = rank(G) = (1/A) Tr(G). (5)

If P is a projection, then I − P is also a projection. So any tight frame {φ_n}_{n∈N}
has a Naimark complement {ψ_n}_{n∈N} with Gram matrix Ψ*Ψ = AI − Φ*Φ, which is a
tight frame for a space of dimension N − dim(span{φ_n}_{n∈N}), and which is unique up
to unitary transformations.
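A Naimark complement can be computed numerically by factoring AI − Φ*Φ, for example via an eigendecomposition (our own illustration; any factorization G = Ψ*Ψ of the correct rank works, since the result is unique only up to a unitary transformation):

```python
import numpy as np

# A tight frame: three unit vectors spaced 120 degrees apart in R^2, A = 3/2.
angles = 2 * np.pi * np.arange(3) / 3
Phi = np.vstack([np.cos(angles), np.sin(angles)])
A = 3 / 2

G_psi = A * np.eye(3) - Phi.T @ Phi  # Gram matrix of a Naimark complement

# Factor G_psi = Psi* Psi using its eigendecomposition; the rank of G_psi
# is N - D = 1 here, so Psi is a single row of three scalars.
vals, vecs = np.linalg.eigh(G_psi)
keep = vals > 1e-10
Psi = np.sqrt(vals[keep])[:, None] * vecs[:, keep].T
```

Here Ψ is a 1 × 3 synthesis operator, a tight frame for a space of dimension N − D = 1, with ΨΨ* = A, as the theory predicts.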
2.2 The Restricted Isometry Property, Coherence, and the Spark Bound
As long as none of the vectors in {φ_n}_{n∈N} are 0, these vectors can be scaled to be
unit norm. And since spark (2) and coherence (1) depend only on vector direction
and not vector length, they are invariant under this normalization.
A sequence {φ_n}_{n∈N} of unit norm vectors in E has the (K, δ)-RIP for some integer
2 ≤ K ≤ N and real number 0 ≤ δ < 1 if every K-vector subsequence {φ_n}_{n∈K} of
{φ_n}_{n∈N} is approximately orthogonal in the sense that its synthesis operator Φ_K
satisfies ∥Φ_K*Φ_K − I∥₂ ≤ δ, that is, if every eigenvalue of its Gram matrix Φ_K*Φ_K lies
in the interval [1 − δ, 1 + δ]. Since δ < 1, this in particular can only happen if every
eigenvalue of Φ_K*Φ_K lies in the open interval (0, 2). In particular, it can only happen
if ker(Φ_K) = ker(Φ_K*Φ_K) = {0}, that is, if each such subsequence {φ_n}_{n∈K} is linearly
independent. Therefore, as a necessary condition for Φ to have the (K, δ)-RIP for
some δ ∈ [0, 1), it must be that spark(Φ) ≥ K + 1.
One problem with attempting to show that Φ has the (K, δ)-RIP is that the direct
approach is computationally intractable for all but the smallest values of K: it involves
estimating eigenvalues (an already difficult and expensive problem in general) for each
of the (N choose K) matrices of size K×K. However, in the specific case when K = 2,
the problem of calculating the eigenvalues of Φ_K*Φ_K − I becomes so simple that we
have a closed-form general expression for δ. The δ in this instance is in fact coh(Φ):

max_{|K|=2} ∥Φ_K*Φ_K − I∥₂ = max_{n₁≠n₂} ∥[ 0, ⟨φ_{n₁}, φ_{n₂}⟩ ; ⟨φ_{n₂}, φ_{n₁}⟩, 0 ]∥₂ = max_{n₁≠n₂} |⟨φ_{n₁}, φ_{n₂}⟩| = coh(Φ).
With this K = 2 case in hand, the Gershgorin circle theorem provides a bound
on δ for larger values of K in terms of the coherence of Φ as well:

max_{|K|=K} ∥Φ_K*Φ_K − I∥₂ ≤ max_{|K|=K} max_{n₁∈K} Σ_{n₂∈K\{n₁}} |⟨φ_{n₁}, φ_{n₂}⟩| ≤ (K − 1) coh(Φ).

This bound is often weak, but it at least provides a way to start estimating δ based
on a concept that is well known and well studied: coherence. We know how to compute
coherence quickly and, as discussed below, have some methods for constructing
sequences of vectors that have small coherence.
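The gap between the Gershgorin estimate and the true δ can be seen on a tiny example (our own illustration; the exhaustive search over subsets is only feasible for very small N):

```python
import itertools
import numpy as np

def rip_delta(Phi, K):
    """max over |K'| = K of ||Phi_K'* Phi_K' - I||_2, by exhaustive search."""
    n = Phi.shape[1]
    worst = 0.0
    for cols in itertools.combinations(range(n), K):
        G = Phi[:, cols].T @ Phi[:, cols] - np.eye(K)
        worst = max(worst, np.abs(np.linalg.eigvalsh(G)).max())
    return worst

# Three unit norm columns in R^2; the largest inner product between
# distinct columns is 1/sqrt(2), so coh(Phi) = 1/sqrt(2).
Phi = np.array([[1.0, 0.0, 1.0 / np.sqrt(2)],
                [0.0, 1.0, 1.0 / np.sqrt(2)]])
coh = 1.0 / np.sqrt(2)

delta2 = rip_delta(Phi, 2)  # equals coh(Phi), matching the K = 2 case above
delta3 = rip_delta(Phi, 3)  # true value is 1; Gershgorin only gives <= 2 * coh
```

For K = 3 the true δ is 1 (three vectors in R² are dependent, so some eigenvalue is 0), while the Gershgorin bound gives the weaker estimate 2/√2 ≈ 1.41.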
As one particularly noteworthy way coherence helps us understand these ideas,
from that Gershgorin circle bound we now have spark(Φ) ≥ K + 1 for any integer
K ≥ 2 such that (K − 1) coh(Φ) < 1. Note that if we take K to be the ceiling of the
reciprocal of coh(Φ), then this is the spark bound (2). There are other reasons why
coherence is such a useful concept in real-world applications [22, 23], but regardless,
we have now sufficiently motivated our next step: we want to understand sequences
{φ_n}_{n∈N} for which coh(Φ) is minimal.
2.3 Equiangular Tight Frames
Since the coherence of a sequence of nonzero vectors is invariant with respect to
scaling, we usually simplify the problem of designing them to have small coherence
by assuming that {φ
n
}
n∈N
is equal norm. That is, assume there exists some S > 0
such that φ
n
2
= S for all n. If {φ
n
}
n∈N
is both equal norm and a tight frame, then
we get from (5) that D := dim[span({φ
n
}
n∈N
)] =
NS
A
. This gives us A =
NS
D
.
Recall that a tight frame Φ has a Naimark complement {ψ_n}_{n∈N} satisfying Ψ*Ψ =
AI − Φ*Φ, which in this case means:

∥ψ_n∥² = (N − D)S/D for all n ∈ N,  ⟨ψ_{n₁}, ψ_{n₂}⟩ = −⟨φ_{n₁}, φ_{n₂}⟩ for all n₁, n₂ ∈ N, n₁ ≠ n₂. (6)

And so, as long as N > D, this means that Ψ is itself an equal norm tight frame for
its span, which is a Hilbert space of dimension N − D.
In this thesis, we study equal norm tight frames that are moreover equiangular.
To see why such frames are important, we now derive the Welch bound (3) as shown
in [24]. We claim that if {φ_n}_{n∈N} is any sequence of equal norm vectors in a Hilbert
space of dimension D, then

(N − D)/(D(N − 1)) ≤ [1/(N(N − 1))] Σ_{n₁∈N} Σ_{n₂∈N\{n₁}} (1/S²)|⟨φ_{n₁}, φ_{n₂}⟩|² (7)
                   ≤ max_{n₁≠n₂} (1/S²)|⟨φ_{n₁}, φ_{n₂}⟩|² = [coh(Φ)]². (8)
To see this, let Φ be the synthesis operator of a sequence of equal norm vectors
{φ_n}_{n∈N} in a D-dimensional space E. Take S > 0 such that ∥φ_n∥² = S for all n ∈ N.
As noted above, we know that Φ is a tight frame if and only if ΦΦ* = (NS/D)I. We
can rephrase this as follows: Φ is a tight frame if and only if equality holds in the
inequality 0 ≤ ∥ΦΦ* − (NS/D)I∥²_Fro, where we have chosen the Frobenius norm because
it can be represented in terms of traces of matrices, which will be convenient below.
In particular,

0 ≤ ∥ΦΦ* − (NS/D)I∥²_Fro = Tr[(ΦΦ* − (NS/D)I)²] = Tr(ΦΦ*ΦΦ* − (2NS/D)ΦΦ* + (N²S²/D²)I).

Since the trace is linear, and matrices within a trace can be cycled, this becomes

0 ≤ Tr(ΦΦ*ΦΦ*) − (2NS/D)Tr(ΦΦ*) + (N²S²/D²)Tr(I)
  = Tr(Φ*ΦΦ*Φ) − (2NS/D)Tr(Φ*Φ) + (N²S²/D²)Tr(I). (9)
Since I in this instance is the identity operator on the space E of dimension D, it has a trace of D. And since (Φ*Φ)(n_1, n_2) = ⟨φ_{n_1}, φ_{n_2}⟩, the trace of Φ*Φ is the sum of the entries of Φ*Φ with n_1 = n_2, which is ∑_{n∈N} ∥φ_n∥² = NS. The trace of (Φ*Φ)² requires more work to simplify. Given two M × N matrices A and B, the Frobenius inner product is

⟨A, B⟩_Fro := Tr(A*B) = ∑_{n∈N} (A*B)(n, n) = ∑_{n∈N} ∑_{m∈M} A*(n, m)B(m, n) = ∑_{m∈M} ∑_{n∈N} Ā(m, n)B(m, n),

where Ā denotes the entrywise complex conjugate of A. Therefore, in the specific case where A = B,

∥A∥²_Fro = ⟨A, A⟩_Fro = ∑_{m∈M} ∑_{n∈N} |A(m, n)|² = Tr(A*A).
When A is self-adjoint, this becomes ∥A∥²_Fro = Tr(A²). Since Φ*Φ is self-adjoint, we can continue (9) as

0 ≤ Tr(Φ*ΦΦ*Φ) − (2NS/D)Tr(Φ*Φ) + (N²S²/D²)Tr(I)
= ∥Φ*Φ∥²_Fro − (2NS/D) ∑_{n∈N} ∥φ_n∥² + N²S²/D
= ∑_{n_1∈N} ∑_{n_2∈N} |(Φ*Φ)(n_1, n_2)|² − 2N²S²/D + N²S²/D
= ∑_{n_1∈N} ∑_{n_2∈N} |⟨φ_{n_1}, φ_{n_2}⟩|² − N²S²/D.
Extracting the diagonal terms of this double sum gives

0 ≤ ∑_{n_1∈N} ∑_{n_2∈N\{n_1}} |⟨φ_{n_1}, φ_{n_2}⟩|² + ∑_{n∈N} |⟨φ_n, φ_n⟩|² − N²S²/D
= ∑_{n_1∈N} ∑_{n_2∈N\{n_1}} |⟨φ_{n_1}, φ_{n_2}⟩|² + NS² − N²S²/D.
Rearranging this inequality gives

N(N − D)S²/D = N(N/D − 1)S² = N²S²/D − NS² ≤ ∑_{n_1∈N} ∑_{n_2∈N\{n_1}} |⟨φ_{n_1}, φ_{n_2}⟩|².

Dividing this by N(N − 1)S² gives us (7).
Now note that in the right-hand side of (7) we are summing over N(N − 1) terms,
and then dividing that sum by N(N − 1). Thus we are computing an average. Since
any average is no more than the largest of the terms being averaged, (8) immediately
follows.
Note that taking the square root of (8) is exactly the Welch bound (3). Equality
will hold in the Welch bound precisely when it holds in both of the inequalities (7)
and (8). As we saw earlier when discussing tight frames, equality in (7) holds if and
only if Φ is an equal norm tight frame.
Equality will hold in (8) precisely when |⟨φ_{n_1}, φ_{n_2}⟩| = max_{n_1′≠n_2′} |⟨φ_{n_1′}, φ_{n_2′}⟩| for all n_1 ≠ n_2. That is, when all inner products between any two distinct vectors are of equal modulus, namely when there exists C ≥ 0 such that |⟨φ_{n_1}, φ_{n_2}⟩| = C whenever n_1 ≠ n_2. We say that Φ is equiangular when it has this property and C > 0; we do this to avoid degenerate cases where C = 0 and {φ_n}_{n∈N} is orthogonal.
So equality holds in both (7) and (8) precisely when Φ is both equiangular and a
tight frame, namely when Φ is an equiangular tight frame (ETF) for E. When we do
not explicitly refer to the space for which Φ is an ETF, we mean that Φ is an ETF
for the subspace of E that it spans.
From before, we know that a given matrix G ∈ F^{N×N} is the Gram matrix Φ*Φ of a tight frame Φ if and only if it is a positive multiple of a projection. In addition, its diagonal entries have constant positive value if and only if Φ is equal norm, and moreover its off-diagonal entries have constant positive modulus (that is, there exists C > 0 such that |⟨φ_{n_1}, φ_{n_2}⟩| = C whenever n_1 ≠ n_2) if and only if Φ is equiangular. So taking the two together, a matrix is the Gram matrix of an ETF if and only if it is a positive multiple of a projection, has diagonal entries of constant positive value, and has off-diagonal entries of constant positive modulus.
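As a quick numerical illustration of this characterization (a sketch of our own, assuming NumPy; this particular Gram matrix is an illustrative choice, not an example from the text), the matrix G = 3I − J, with J the all-ones 3 × 3 matrix, satisfies all three conditions and so is the Gram matrix of a 3-vector ETF for a 2-dimensional space:

```python
import numpy as np

N = 3
G = 3 * np.eye(N) - np.ones((N, N))    # candidate ETF Gram matrix

# (i) Positive multiple of a projection: G^2 = A G with A = 3 > 0.
assert np.allclose(G @ G, 3 * G)

# (ii) Constant positive diagonal (equal norm vectors, squared norm 2).
assert np.allclose(np.diag(G), 2)

# (iii) Constant positive off-diagonal modulus (equiangular, modulus 1).
assert np.allclose(np.abs(G[~np.eye(N, dtype=bool)]), 1)

# Its rank is the dimension D of the space the ETF spans: here D = 2 < N = 3.
assert np.linalg.matrix_rank(G) == 2
```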
It follows that if {φ_n}_{n∈N} is equiangular, then it is an ETF if and only if (4) holds for all n_1, n_3 ∈ N. Since equiangular means |⟨φ_{n_1}, φ_{n_2}⟩| = C > 0 whenever n_1 ≠ n_2, we can scale our vectors by C^{−1/2} without loss of generality so that |⟨φ_{n_1}, φ_{n_2}⟩| = 1 whenever n_1 ≠ n_2. From here we can recover an important characterization of ETFs.
Lemma 2.3.1 (Folklore). If {φ_n}_{n∈N} is equiangular, scaled without loss of generality so that |⟨φ_{n_1}, φ_{n_2}⟩| = 1 for all distinct n_1, n_2 ∈ N, then it is an ETF if and only if

−S + (N − 1)/S = ∑_{n_2∈N\{n_1,n_3}} ⟨φ_{n_1}, φ_{n_2}⟩⟨φ_{n_2}, φ_{n_3}⟩⟨φ_{n_3}, φ_{n_1}⟩, ∀ n_1, n_3 ∈ N, n_1 ≠ n_3, (10)

where ∥φ_n∥² = S for all n.
Proof. Since Φ is equiangular, it is an ETF if and only if it is tight, namely if and only if (4) holds for all n_1, n_3 ∈ N. If this occurs, then applying (4) when n_1 = n_3 gives

AS = A∥φ_{n_1}∥² = ∑_{n_2∈N} ⟨φ_{n_1}, φ_{n_2}⟩⟨φ_{n_2}, φ_{n_1}⟩ = ∑_{n_2∈N} |⟨φ_{n_1}, φ_{n_2}⟩|² = |⟨φ_{n_1}, φ_{n_1}⟩|² + ∑_{n_2∈N\{n_1}} |⟨φ_{n_1}, φ_{n_2}⟩|² = S² + (N − 1),

and so A = S + (N − 1)/S. For any n_1 ≠ n_3, we now know

(S + (N − 1)/S) ⟨φ_{n_1}, φ_{n_3}⟩ = ∑_{n_2∈N} ⟨φ_{n_1}, φ_{n_2}⟩⟨φ_{n_2}, φ_{n_3}⟩. (11)
Multiplying both sides of (11) by ⟨φ_{n_3}, φ_{n_1}⟩ gives us

S + (N − 1)/S = ∑_{n_2∈N} ⟨φ_{n_1}, φ_{n_2}⟩⟨φ_{n_2}, φ_{n_3}⟩⟨φ_{n_3}, φ_{n_1}⟩
= ∥φ_{n_1}∥² |⟨φ_{n_1}, φ_{n_3}⟩|² + ∥φ_{n_3}∥² |⟨φ_{n_1}, φ_{n_3}⟩|² + ∑_{n_2∈N\{n_1,n_3}} ⟨φ_{n_1}, φ_{n_2}⟩⟨φ_{n_2}, φ_{n_3}⟩⟨φ_{n_3}, φ_{n_1}⟩
= 2S + ∑_{n_2∈N\{n_1,n_3}} ⟨φ_{n_1}, φ_{n_2}⟩⟨φ_{n_2}, φ_{n_3}⟩⟨φ_{n_3}, φ_{n_1}⟩,

and therefore (10) holds. Conversely, if (10) holds, then it implies that the equation above does as well; dividing it by ⟨φ_{n_3}, φ_{n_1}⟩ then gives (11) whenever n_1 ≠ n_3.
Defining A to be S + (N − 1)/S then gives us that (4) holds for all n_1, n_3 ∈ N, namely that {φ_n}_{n∈N} is tight.
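A minimal numerical check of Lemma 2.3.1 (assuming NumPy; the frame below is an illustrative choice of ours): the 3-vector regular simplex in R² formed by three vectors at 120° angles, scaled so that |⟨φ_{n_1}, φ_{n_2}⟩| = 1 for n_1 ≠ n_2. Here S = 2, N = 3, and each triple-product sum in (10) should equal −S + (N − 1)/S = −1.

```python
import numpy as np

# Three vectors at 120 degrees in R^2, scaled so every off-diagonal inner
# product has modulus 1; then the common squared norm is S = 2.
N = 3
angles = 2 * np.pi * np.arange(N) / N
Phi = np.sqrt(2) * np.vstack([np.cos(angles), np.sin(angles)])
G = Phi.T @ Phi            # diagonal entries 2, off-diagonal entries -1

S = G[0, 0]
target = -S + (N - 1) / S  # left-hand side of (10); equals -1 here

for n1 in range(N):
    for n3 in range(N):
        if n1 == n3:
            continue
        total = sum(G[n1, n2] * G[n2, n3] * G[n3, n1]
                    for n2 in range(N) if n2 not in (n1, n3))
        assert np.isclose(total, target)   # (10) holds: this simplex is an ETF
```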
We call ⟨φ_{n_1}, φ_{n_2}⟩⟨φ_{n_2}, φ_{n_3}⟩⟨φ_{n_3}, φ_{n_1}⟩ the triple product of φ_{n_1}, φ_{n_2}, φ_{n_3}. In the next chapter, we use (10) to characterize the subsequences of an ETF that form a special type of ETF known as a regular simplex. Note that when an ETF Φ is scaled without loss of generality such that |⟨φ_{n_1}, φ_{n_3}⟩| = 1 whenever n_1 ≠ n_3, then by (6) we also have that the Naimark complement Ψ has the property |⟨ψ_{n_1}, ψ_{n_3}⟩| = 1 whenever n_1 ≠ n_3 and is itself an N-vector ETF for a space of dimension N − D.
III. Binders and Double Transitivity
In this chapter, we will be discussing double transitivity, binders of ETFs, and
how the two interact. In Section 3.1, we define the binder of an ETF and discuss
how it relates to the spark bound (2). We also review the type of combinatorial
design that is known as a balanced incomplete block design (BIBD), and discuss the
consequences of an ETF’s binder forming a BIBD. In Section 3.2, we define what it
means for two ETFs to be equivalent in a certain sense, and then use this to define
the symmetry group of an ETF. We then define double transitivity and explain what
it means to be a doubly transitive ETF (DTETF). In Section 3.3, we then use these
ideas to prove the first main result of this thesis (Theorem 3.3.2), namely that the
binder of any DTETF is either empty or forms a BIBD.
3.1 Binders of Equiangular Tight Frames
As discussed in the previous section, ETFs have minimal coherence, and thus
also the largest possible spark bound (2), and this makes them potentially useful for
compressed sensing. From this point on, let Φ = {φ_n}_{n∈N} be an N-vector ETF, scaled so that |⟨φ_{n_1}, φ_{n_2}⟩| = 1 whenever n_1 ≠ n_2, and therefore ∥φ_n∥² = S = [D(N − 1)/(N − D)]^{1/2} for all n, where D = rank(Φ) = dim(span({φ_n}_{n∈N})). In particular, since coh(Φ) = 1/S, the spark bound (2) becomes spark(Φ) ≥ S + 1. That is, any subsequence {φ_n}_{n∈K} of {φ_n}_{n∈N} is guaranteed to be linearly independent if |K| < S + 1.
Clearly, any K-vector subsequence Φ_K := {φ_n}_{n∈K} of {φ_n}_{n∈N} will still be equiangular: |⟨φ_{n_1}, φ_{n_2}⟩| = 1 for all distinct n_1, n_2 ∈ N, so this in particular holds for all distinct n_1, n_2 ∈ K ⊆ N. Therefore, if Φ_K happens to be a tight frame for its span, then it will itself be an ETF. This concept of ETFs within ETFs is known to happen infinitely often in various nontrivial ways [18, 17, 11, 10, 19, 9].
The specific case we pursue in this thesis is the one where dim(span({φ_n}_{n∈K})) is one less than the number of vectors in Φ_K. In this instance, we call Φ_K a regular simplex. The binder of Φ is

bin(Φ) := {K ⊆ N : {φ_n}_{n∈K} is an ETF, dim(span({φ_n}_{n∈K})) = |K| − 1}.

Note that under this definition, elements of bin(Φ) are defined to be subsets of the set N of indices of our ETF Φ rather than subsequences of the ETF itself. So bin(Φ) is the set of all subsets K ⊆ N such that Φ_K is a regular simplex.
3.1.1 Binders and Spark
As we now explain, bin(Φ) is directly related to spark(Φ) in a way that will make it a useful tool for evaluating Φ's suitability for compressed sensing. Theorem 3.1 of [10] tells us bin(Φ) is nonempty if and only if spark(Φ) = S + 1, and moreover when this occurs, K ∈ bin(Φ) if and only if Φ_K is a linearly dependent subsequence of Φ of minimal cardinality |K| = S + 1.
To see why this is true, note that if bin(Φ) is nonempty, then there exists a K-element subset K of N such that Φ_K is a regular simplex. Since Φ_K is an ETF, it must achieve equality in its Welch bound (3). Also, since every vector in Φ_K is also in Φ, it inherits its inner products, and so coh(Φ_K) = coh(Φ). Thus,

1/S = [(N − D)/(D(N − 1))]^{1/2} = coh(Φ) = coh(Φ_K) = [(K − (K − 1))/((K − 1)(K − 1))]^{1/2} = 1/(K − 1),

and so K = S + 1. Of particular note, this means that there is no ambiguity about the size of such a regular simplex; if Φ_K is a regular simplex, it must consist of S + 1 vectors. Since the spark bound (2) tells us that spark(Φ) ≥ S + 1, this means that spark(Φ) = S + 1 and that Φ_K is a linearly dependent subsequence of Φ of minimal size.
Conversely, if spark(Φ) = S + 1, then there exists an (S + 1)-vector linearly dependent subsequence Φ_K of Φ. The subspace spanned by Φ_K has dimension precisely dim(span(Φ_K)) = S: since spark(Φ) = S + 1, we know any set of S vectors is linearly independent, so if we remove any one vector from Φ_K, the remaining vectors are linearly independent and span a space of dimension S, and returning that linearly dependent vector does not change the dimension. Also, as described above, coh(Φ_K) = coh(Φ) = 1/S, so any such Φ_K therefore achieves equality in the Welch bound for S + 1 vectors in this subspace. Therefore, Φ_K is an ETF that consists of S + 1 vectors that span a space of dimension S, and so is a regular simplex. Any such K thus lies in bin(Φ).
Since the spark bound (2) increases as coherence decreases, and we want the spark
to be large so that our matrix is better suited for compressed sensing, we have chosen
to work with ETFs. They are sequences of a given number of vectors in a space
of given dimension that achieve equality in the Welch bound, and so have minimal
coherence. They thus also have the greatest spark bound (2). That said, if an ETF
Φ has a nonempty binder then we know that it achieves equality in its spark bound
(2), which from a compressed sensing perspective is actually the worst-case scenario.
If the binder of Φ is instead empty, then this guarantees that equality does not hold
in the spark bound (2), and so this provides an improved lower bound on spark(Φ),
namely that the inequality in (2) is strict. With all of that said, there is still merit in
characterizing the binder of Φ when it is nonempty. For example, if we can determine
the cardinality of bin(Φ), then we can compare it to the binomial coefficient (N choose S + 1) of all (S + 1)-vector subsequences of Φ to quantify how frequently the mapping x ↦ Φx confuses minimally sparse nonzero signals x with 0.
One consequence of what we have just shown is that a given subset K of N belongs to bin(Φ) if and only if Φ_K is an (S + 1)-vector tight frame. If we apply (10) to Φ_K in the case where N is K and N is S + 1, we see that this occurs if and only if |K| = S + 1 and

−(S − 1) = ∑_{n_2∈K\{n_1,n_3}} ⟨φ_{n_1}, φ_{n_2}⟩⟨φ_{n_2}, φ_{n_3}⟩⟨φ_{n_3}, φ_{n_1}⟩

for all n_1, n_3 ∈ K with n_1 ≠ n_3. Since S − 1 unimodular numbers sum to −(S − 1) if and only if each of these numbers is −1, this recovers the triple-product-based characterization of the binder given in Theorem 4.2 of [10], namely that K ∈ bin(Φ) if and only if

|K| = S + 1,   ⟨φ_{n_1}, φ_{n_2}⟩⟨φ_{n_2}, φ_{n_3}⟩⟨φ_{n_3}, φ_{n_1}⟩ = −1, (12)

for all n_1, n_2, n_3 ∈ K such that n_1 ≠ n_2 ≠ n_3 ≠ n_1. Although this was introduced as a numerical aid [10], we exploit this characterization symbolically as was done in [4].
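The characterization (12) lends itself to brute-force computation: given only a Gram matrix, enumerate the (S + 1)-element subsets and keep those whose triple products all equal −1. A minimal sketch (assuming NumPy; the Gram matrix below, that of a 4-vector regular simplex ETF for a 3-dimensional space with S = 3, is an illustrative choice of ours):

```python
import itertools
import numpy as np

# Gram matrix of a 4-vector ETF for a 3-dimensional space, scaled so that all
# off-diagonal entries are unimodular; here S = [D(N-1)/(N-D)]^(1/2) = 3, so
# candidate members of the binder have S + 1 = 4 elements.
G = np.array([[ 3,  1,  1,  1],
              [ 1,  3, -1, -1],
              [ 1, -1,  3, -1],
              [ 1, -1, -1,  3]], dtype=float)
N, S = 4, 3

def in_binder(K):
    """Check (12): every triple product over distinct indices of K equals -1."""
    return all(np.isclose(G[a, b] * G[b, c] * G[c, a], -1)
               for a, b, c in itertools.combinations(K, 3))

binder = [K for K in itertools.combinations(range(N), S + 1) if in_binder(K)]
assert binder == [(0, 1, 2, 3)]   # the whole ETF is itself a regular simplex
```

Since triple products are invariant under cyclic shifts and their value here is required to be real, checking each unordered triple once suffices.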
Some ETFs are known to have empty binders; for example, for ETFs whose Welch bound S is not an integer, the idea of an (S + 1)-element subsequence is not well defined. Other ETFs are known to have nonempty binders of varying types [10, 19]. That said, a remarkable number of ETFs (at least one infinite family [4] and various other small examples [10]) have binders which form balanced incomplete block designs (BIBDs). When this occurs, it grants a sparse representation of that ETF's Naimark complement, as shown in Theorem 5.1 of [10]. Since this is a central theme of this thesis, we now explain these ideas in detail.
3.1.2 Balanced Incomplete Block Designs
Given integers V ≥ K ≥ 2 and Λ, a BIBD (V, {K_b}_{b∈B}) is a V-element (vertex) set V along with a nonempty finite sequence {K_b}_{b∈B} of K-element subsets of V called blocks, where any two distinct vertices are contained in exactly Λ blocks. For any given vertex v, let us call R the number of blocks that contain v, and let us count {(v′, b) ∈ V × B : v, v′ ∈ K_b, v′ ≠ v} in two ways. We could first choose v′ (V − 1 choices) and then count all of the blocks containing both v and v′ (Λ choices) for a total count of (V − 1)Λ, or we could first choose a block containing v (R choices) and then choose a v′ from that block (K − 1 choices) for a total count of R(K − 1). This tells us that (V − 1)Λ = R(K − 1), and so in particular the value of R is independent of one's choice of v ∈ V, and is determined by V, K, and Λ. Since we require a BIBD to consist of at least one block and V ≥ K ≥ 2, this also implies that Λ ≥ 1.
We can similarly count B := |B|, the number of blocks in the BIBD, by counting {(v, b) ∈ V × B : v ∈ K_b} in two ways. We could start by choosing a vertex v (V choices) and then counting the number of blocks containing v (R choices) for a total count of V R, or we could start by choosing a block (B choices) then counting the vertices in each block (K choices) for a total count of BK. So BK = V R, and thus B is also determined by the values of V, K, and Λ.
We can represent a BIBD (V, {K_b}_{b∈B}) in an easily digestible way via its incidence matrix X ∈ R^{B×V}, where X(b, v) := 1 if v ∈ K_b and X(b, v) := 0 otherwise. That is, each row of the matrix represents a block, each column represents a vertex, and so the matrix itself is a row-by-row ordered representation of which vertices are in which blocks. Letting 1_V, 1_B, and 1_{V×V} be the all-ones vectors and matrix of their respective sizes, X satisfies

X 1_V = K 1_B,   Xᵀ 1_B = R 1_V,   Xᵀ X = (R − Λ)I + Λ 1_{V×V}.
To explain the first: the bth entry of X 1_V is the sum of the entries in the bth row of X; this is the number of vertices that lie in the bth block, namely K. Similarly, every entry of Xᵀ 1_B is the number of blocks which contain a given vertex, namely the replication number R. To explain the third: Xᵀ X is a V × V matrix whose (v_1, v_2) entry is the number of blocks containing both v_1 and v_2. When v_1 ≠ v_2 (the off-diagonal entries), this is Λ. When v_1 = v_2 (the diagonal entries), this is just the number of blocks containing a single vertex v, which we know is R. So having Xᵀ X = (R − Λ)I + Λ 1_{V×V} is a clean way to summarize both scenarios without resorting to cases.
If we exclude degenerate cases where K = V, that is, cases where each block is the entire vertex set, then we have Λ < Λ(V − 1)/(K − 1) = R. This implies that Xᵀ X is positive definite, and so V = rank(Xᵀ X) = rank(X) ≤ B. Taking this all together, we have that a BIBD can only exist if

R = Λ(V − 1)/(K − 1) ∈ Z,   B = V R/K = ΛV(V − 1)/(K(K − 1)) ∈ Z,   and K = V or V ≤ B. (13)
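These identities and the conditions (13) can be verified on a classical example, the (V, K, Λ) = (7, 3, 1) BIBD known as the Fano plane (an illustrative choice of ours, assuming NumPy):

```python
import numpy as np

# The seven blocks (lines) of the Fano plane on vertices 0..6.
blocks = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5},
          {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]
V, K, Lam = 7, 3, 1
B = len(blocks)
X = np.array([[1 if v in blk else 0 for v in range(V)] for blk in blocks])

R = Lam * (V - 1) // (K - 1)            # replication number, here 3
assert (X.sum(axis=1) == K).all()       # X 1_V = K 1_B
assert (X.sum(axis=0) == R).all()       # X^T 1_B = R 1_V
assert np.array_equal(X.T @ X,
                      (R - Lam) * np.eye(V, dtype=int)
                      + Lam * np.ones((V, V), dtype=int))
assert B == Lam * V * (V - 1) // (K * (K - 1)) and V <= B   # conditions (13)
```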
3.1.3 Binders and Balanced Incomplete Block Designs
We know that the binder bin(Φ) of an ETF Φ = {φ_n}_{n∈N} is a set of (S + 1)-element subsets K of the N-element set N. Letting {K_b}_{b∈B} be any enumeration of the members of bin(Φ), this means that (N, {K_b}_{b∈B}) is a BIBD if and only if there exists Λ such that any pair of distinct vectors of Φ is contained in exactly Λ regular simplices that are themselves contained in Φ.
When this occurs, for any b ∈ B, a Naimark complement of {φ_n}_{n∈K_b} is an ETF for a space of dimension one. This is true because {φ_n}_{n∈K_b} is an ETF for a space of dimension S that consists of S + 1 vectors, and so its Naimark complement is also an ETF of S + 1 vectors for a space of dimension N − D = (S + 1) − S = 1. Without loss of generality taking this one-dimensional space to be F, (6) gives that such a Naimark complement equates to a sequence of unimodular scalars {z_{b,n}}_{n∈K_b} such that z̄_{b,n_1} z_{b,n_2} = −⟨φ_{n_1}, φ_{n_2}⟩ for all n_1, n_2 ∈ K_b with n_1 ≠ n_2. We can then use these scalars to repopulate X to get a phased incidence matrix:

Ψ ∈ F^{B×V},   Ψ(b, n) := (1/√Λ) z_{b,n} if n ∈ K_b, and Ψ(b, n) := 0 if n ∉ K_b. (14)
For any n ∈ N,

(Ψ*Ψ)(n, n) = (1/Λ) ∑_{b∈B} |z_{b,n}|²,

where |z_{b,n}|² = 1 if n ∈ K_b and |z_{b,n}|² = 0 otherwise. So we can restrict the sum to the nonzero terms:

(Ψ*Ψ)(n, n) = (1/Λ) ∑_{b∈B: n∈K_b} 1.

This sum is just a count of the number of blocks b such that n ∈ K_b, which is the replication number R. So, by (13),

(Ψ*Ψ)(n, n) = R/Λ = (V − 1)/(K − 1).

In this instance, V = N, K = S + 1, and S = [D(N − 1)/(N − D)]^{1/2}, so

(Ψ*Ψ)(n, n) = (V − 1)/(K − 1) = (N − 1)/S = (N − D)S/D.
On the other hand, when n_1 ≠ n_2,

(Ψ*Ψ)(n_1, n_2) = (1/Λ) ∑_{b∈B} z̄_{b,n_1} z_{b,n_2},

where our unimodular constants were chosen specifically so that z̄_{b,n_1} z_{b,n_2} = −⟨φ_{n_1}, φ_{n_2}⟩ if n_1, n_2 ∈ K_b, and z̄_{b,n_1} z_{b,n_2} will clearly be 0 if either n_1 or n_2 is not in K_b. So we can restrict the sum to

(Ψ*Ψ)(n_1, n_2) = (1/Λ) ∑_{b∈B: n_1,n_2∈K_b} (−⟨φ_{n_1}, φ_{n_2}⟩) = −(1/Λ) ⟨φ_{n_1}, φ_{n_2}⟩ ∑_{b∈B: n_1,n_2∈K_b} 1,

since −⟨φ_{n_1}, φ_{n_2}⟩ does not depend on b and so we can factor it out of the sum. The remaining sum is just a count of the number of blocks K_b that contain both n_1 and n_2, namely Λ. Thus

(Ψ*Ψ)(n_1, n_2) = −⟨φ_{n_1}, φ_{n_2}⟩.
From (6), this means Ψ is the synthesis operator of a Naimark complement of Φ. To summarize, if bin(Φ) forms a BIBD, then we can take Naimark complements of each regular simplex corresponding to a member of bin(Φ), use these Naimark complements to phase the incidence matrix of this BIBD, and so recover a synthesis operator of a Naimark complement of Φ itself.
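Here is a minimal instance of this recovery (a sketch assuming NumPy, using as an illustration the 4-vector regular simplex ETF whose Gram matrix G is written out below; its binder is the single block {0, 1, 2, 3}, a degenerate BIBD with V = K = 4, B = 1, and Λ = 1):

```python
import numpy as np

# Gram matrix of a 4-vector regular simplex ETF (S = 3, so S + 1 = 4 = N and
# the whole frame is its own unique regular simplex).
G = np.array([[ 3,  1,  1,  1],
              [ 1,  3, -1, -1],
              [ 1, -1,  3, -1],
              [ 1, -1, -1,  3]], dtype=float)

# A one-dimensional Naimark complement of that simplex: unimodular scalars z_n
# with z_{n1} z_{n2} = -<phi_{n1}, phi_{n2}> for n1 != n2. Taking z_0 = 1
# forces z_n = -G[0, n] for n != 0.
z = np.array([1.0, -1.0, -1.0, -1.0])

# Phased incidence matrix (14) with Lambda = 1: a single phased row.
Psi = z[np.newaxis, :]

# Its Gram matrix is that of a Naimark complement of Phi, namely
# A I - G with A = S + (N - 1)/S = 4.
assert np.allclose(Psi.T @ Psi, 4 * np.eye(4) - G)
```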
3.2 Equivalence and Symmetry Groups
In general, we say that a sequence of vectors {ψ_n}_{n∈N} in a Hilbert space is equivalent to another such sequence {φ_n}_{n∈N} if there exist scalars {z_n}_{n∈N} in {z ∈ F : |z| = 1} such that ⟨ψ_{n_1}, ψ_{n_2}⟩ = z_{n_1} z̄_{n_2} ⟨φ_{n_1}, φ_{n_2}⟩ for all n_1, n_2 ∈ N.
Any cycle of inner products is preserved by this notion of equivalence, such as the quantities ∥φ_n∥² = ⟨φ_n, φ_n⟩ for any n, |⟨φ_{n_1}, φ_{n_2}⟩|² = ⟨φ_{n_1}, φ_{n_2}⟩⟨φ_{n_2}, φ_{n_1}⟩ for any n_1 and n_2, and triple products:

⟨z_{n_1}φ_{n_1}, z_{n_2}φ_{n_2}⟩⟨z_{n_2}φ_{n_2}, z_{n_3}φ_{n_3}⟩⟨z_{n_3}φ_{n_3}, z_{n_1}φ_{n_1}⟩ = ⟨φ_{n_1}, φ_{n_2}⟩⟨φ_{n_2}, φ_{n_3}⟩⟨φ_{n_3}, φ_{n_1}⟩. (15)
This is of particular note: since triple products are invariant to unimodular scaling, (10) implies that any sequence of vectors that is equivalent to an ETF {φ_n}_{n∈N} is also an ETF, and (12) gives that these two ETFs have the same binder.
This notion of equivalence also preserves tightness (4), the RIP, coherence, and spark. We caution, however, that there exist properties of note that are not necessarily preserved under this definition of equivalence, such as centroidal symmetry [13] and flatness [12]. That said, this notion of equivalence will be sufficient for our work here.
3.2.1 Symmetry Groups
Given any finite set N, a permutation group G is a subgroup of the symmetric group Sym(N) of all permutations on N. Let Φ = {φ_n}_{n∈N} be a finite sequence of equal norm vectors in a Hilbert space E. A symmetry of Φ is a permutation σ of the indexing set N such that {φ_{σ(n)}}_{n∈N} is equivalent to {φ_n}_{n∈N}, that is, such that there exist unimodular scalars {z_n}_{n∈N} such that ⟨φ_{σ(n_1)}, φ_{σ(n_2)}⟩ = z_{n_1} z̄_{n_2} ⟨φ_{n_1}, φ_{n_2}⟩ when n_1 ≠ n_2. We call the set of all such symmetries the symmetry group Sym(Φ) of Φ, not to be confused with the symmetric group Sym(N) of N.
Such a symmetry group is indeed a group. To see this, note that

⟨φ_{id(n_1)}, φ_{id(n_2)}⟩ = ⟨φ_{n_1}, φ_{n_2}⟩,

so the identity permutation is in Sym(Φ). Given any σ, σ′ ∈ Sym(Φ) with corresponding unimodular scalars {z_n}_{n∈N} and {z′_n}_{n∈N},

⟨φ_{σ′[σ(n_1)]}, φ_{σ′[σ(n_2)]}⟩ = z′_{σ(n_1)} z̄′_{σ(n_2)} ⟨φ_{σ(n_1)}, φ_{σ(n_2)}⟩ = (z′_{σ(n_1)} z_{n_1})(z̄′_{σ(n_2)} z̄_{n_2}) ⟨φ_{n_1}, φ_{n_2}⟩,

and so σ′σ ∈ Sym(Φ). And given any σ ∈ Sym(Φ),

⟨φ_{σ⁻¹(n_1)}, φ_{σ⁻¹(n_2)}⟩ = z̄_{σ⁻¹(n_1)} z_{σ⁻¹(n_2)} ⟨φ_{n_1}, φ_{n_2}⟩,

which, since z̄ = z⁻¹ for unimodular z, is again of the required form, so σ⁻¹ ∈ Sym(Φ). So Sym(Φ) contains the identity and is closed under composition and inverses, so it is a group.
One useful property of Sym(Φ) is that by (6) we see that Sym(Φ) = Sym(Ψ), where Ψ is a Naimark complement of Φ. From [21] we see that σ ∈ Sym(Φ) does not affect triple products, and thus, as an almost immediate consequence of (12), σ permutes any regular simplex that is contained in an ETF to another such regular simplex, as we will now show.
Lemma 3.2.1. Let Φ be an ETF, let σ ∈ Sym(Φ), and let K ∈ bin(Φ). Then σ(K) := {σ(n) : n ∈ K} is also a member of bin(Φ).
Proof. Since σ is a member of the symmetry group, there exists some sequence of unimodular scalars {z_n}_{n∈N} such that ⟨φ_{σ(n_1)}, φ_{σ(n_2)}⟩ = z_{n_1} z̄_{n_2} ⟨φ_{n_1}, φ_{n_2}⟩ for all distinct n_1, n_2 ∈ N. So the triple product of any distinct φ_{σ(n_1)}, φ_{σ(n_2)}, φ_{σ(n_3)} ∈ {φ_n}_{n∈σ(K)} is:

⟨φ_{σ(n_1)}, φ_{σ(n_2)}⟩⟨φ_{σ(n_2)}, φ_{σ(n_3)}⟩⟨φ_{σ(n_3)}, φ_{σ(n_1)}⟩ = z_{n_1} z̄_{n_1} z_{n_2} z̄_{n_2} z_{n_3} z̄_{n_3} ⟨φ_{n_1}, φ_{n_2}⟩⟨φ_{n_2}, φ_{n_3}⟩⟨φ_{n_3}, φ_{n_1}⟩ = ⟨φ_{n_1}, φ_{n_2}⟩⟨φ_{n_2}, φ_{n_3}⟩⟨φ_{n_3}, φ_{n_1}⟩.

So σ preserves triple products, and |{φ_n}_{n∈K}| = |{φ_n}_{n∈σ(K)}| because σ is a permutation. Combining this with (12) gives that if K ∈ bin(Φ), then σ(K) ∈ bin(Φ).
3.3 Doubly Transitive Equiangular Tight Frames
For any sequence of equal norm vectors Φ = {φ_n}_{n∈N}, we say that Φ is doubly transitive if the natural action of Sym(Φ) on N is doubly transitive, that is, if given any n_1, n_2, n_3, n_4 ∈ N where n_1 ≠ n_2 and n_3 ≠ n_4, there exists a permutation σ ∈ Sym(Φ) such that σ(n_1) = n_3 and σ(n_2) = n_4. As we now explain, it was recently observed that if Φ is doubly transitive and linearly dependent, then it is necessarily an ETF. That is, if we know that dim(span({φ_n}_{n∈N})) < N, then any such doubly transitive Φ must be an ETF. This is proven in Lemma 1.1 of [20], but we provide a condensed alternative proof here.
Lemma 3.3.1 (Lemma 1.1 of [20]). Let Φ be a finite sequence of equal norm vectors {φ_n}_{n∈N} in a Hilbert space. If Φ is linearly dependent and the natural action of its symmetry group Sym(Φ) is doubly transitive, then Φ is an ETF.
Proof. Since Φ is linearly dependent, D := dim(span({φ_n}_{n∈N})) < N. Consider the matrix A ∈ F^{N×N} defined by

A(n_1, n_3) := ∑_{n_2∈N} ⟨φ_{n_1}, φ_{n_2}⟩⟨φ_{n_2}, φ_{n_3}⟩⟨φ_{n_3}, φ_{n_1}⟩ = (Φ*Φ)²(n_1, n_3)(Φ*Φ)(n_3, n_1).

Since we assume that Φ is doubly transitive, for any n_1, n_3, n_4, n_6 ∈ N with n_1 ≠ n_3 and n_4 ≠ n_6, there exists σ ∈ Sym(Φ) such that σ(n_1) = n_4 and σ(n_3) = n_6. Since σ ∈ Sym(Φ), there exists a sequence of unimodular scalars (z_n)_{n∈N} such that ⟨φ_{σ(n_7)}, φ_{σ(n_8)}⟩ = z_{n_7} z̄_{n_8} ⟨φ_{n_7}, φ_{n_8}⟩ for all n_7, n_8 ∈ N. In particular, |⟨φ_{n_4}, φ_{n_6}⟩| = |⟨φ_{n_1}, φ_{n_3}⟩| for all such n_1, n_3, n_4, n_6. This implies that there exists C ≥ 0 such that |⟨φ_{n_1}, φ_{n_3}⟩| = C when n_1 ≠ n_3.
Since N > D, these vectors cannot be mutually orthogonal, and so C > 0. We then without loss of generality scale Φ so that C = 1. Similarly, (15) implies that A(n_4, n_6) = A(n_1, n_3) for all such n_1, n_3, n_4, n_6, and so there exists B ∈ F such that A(n_1, n_3) = B when n_1 ≠ n_3. Multiplying our equation for A by (Φ*Φ)(n_1, n_3), and noting that |(Φ*Φ)(n_1, n_3)|² = C² = 1, thus gives (Φ*Φ)²(n_1, n_3) = B(Φ*Φ)(n_1, n_3) when n_1 ≠ n_3. When instead n_1 = n_3, (Φ*Φ)²(n_1, n_3) = S² + (N − 1) while (Φ*Φ)(n_1, n_3) = S. Altogether, (Φ*Φ)² = B(Φ*Φ) + [S(S − B) + (N − 1)]I. That said, ker(Φ*Φ) contains a nonzero vector since N > D, and applying this above equation to it gives S(S − B) + (N − 1) = 0. Thus, (Φ*Φ)² = B(Φ*Φ), implying Φ is a tight frame and so also an ETF.
So we see that this idea of double transitivity can guarantee that Φ is an ETF, in which case we call Φ a doubly transitive ETF (DTETF); [21] provides many types of DTETFs. Since an ETF Φ and its Naimark complement share a symmetry group, having either one be a DTETF implies that the other is as well. We now prove the first main result of this thesis.
Theorem 3.3.2. If Φ := {φ_n}_{n∈N} is a DTETF whose binder bin(Φ) is nonempty, then (N, bin(Φ)) is a BIBD.
Proof. Take any n_1, n_2, n_3, n_4 ∈ N where n_1 ≠ n_2 and n_3 ≠ n_4. Let S_{1,2} be the set of all members K of bin(Φ) containing n_1 and n_2, and let S_{3,4} be the set of all members K′ of bin(Φ) containing n_3 and n_4. Since Φ is a DTETF, there exists σ ∈ Sym(Φ) such that σ(n_1) = n_3 and σ(n_2) = n_4. Note that for any K ∈ S_{1,2}, by Lemma 3.2.1, σ(K) is a member of bin(Φ), and so σ(K) ∈ S_{3,4}.
This transformation from S_{1,2} into S_{3,4} is one-to-one: since σ is a finite permutation, it is invertible; so if there exist K_1, K_2 ∈ S_{1,2} such that σ(K_1) = K = σ(K_2), then K_1 = σ⁻¹(K) = K_2. Therefore, |S_{1,2}| ≤ |S_{3,4}|. By symmetry, |S_{1,2}| = |S_{3,4}|.
Since bin(Φ) is nonempty, there exists some choice of n_1, n_2 ∈ N such that S_{n_1,n_2} is nonempty. Therefore, these sets are all nonempty and of the same size. So (N, bin(Φ)) forms a BIBD.
This result applies at this level of generality to every DTETF. We do not need
to know a DTETF’s parameters, what field it lies over, or have a formula for its
Gram matrix in order to apply this result. That said, for any given DTETF, it would
be helpful to know those things. Now that we know that bin(Φ) is either empty
or forms a BIBD, if we can find a single regular simplex in Φ and explain how it is
constructed, we may eventually be able to leverage this to calculate the size of bin(Φ).
This would prove that Φ achieves equality in its spark bound – which is bad from the
perspective of compressed sensing but would allow us to calculate the probability
that a randomly chosen subsequence Φ
K
of Φ with size K = spark(Φ) = S + 1 is
linearly dependent:
|bin(Φ)|
N
S+1
.
In a compressed sensing context, this could help to answer the question “How likely
are we to lose information when the signal is not sufficiently sparse?”
Because families of DTETFs arise from different methods of construction, it seems unlikely that there is a general method for counting the regular simplices within any given DTETF. However, if we have a thorough understanding of the Gram matrix of a particular family of DTETFs, we can attempt to use the triple product characterization of the binder given in (12) to count the regular simplices within those DTETFs in particular. As a proof of concept of this idea, in the next chapter we apply this theory to a particular infinite family of DTETFs, namely to the symplectic ETFs and their Naimark complements.
IV. Symplectic Equiangular Tight Frames
In this chapter, we will define the symplectic ETF and its Naimark complement,
and characterize their binders. In Section 4.1, we discuss symplectic forms and sym-
plectic ETFs. In Section 4.2, we review the notion of a hyperbolic basis of a symplectic
space. In Section 4.3, we discuss the symplectic group and use it to prove that the
symplectic ETF and its Naimark complement are DTETFs. In Section 4.4, we fully
characterize the binder of the Naimark complement of the symplectic ETF; this is
given in Theorem 4.4.3, our second main result. Our third main result (Theorem 4.5.2)
is given in Section 4.5; in it we demonstrate that the binder of the symplectic ETF
is empty in all but two small cases.
4.1 Symplectic Forms and Symplectic Equiangular Tight Frames
Given a finite-dimensional vector space V over a field F, a bilinear form is a mapping B : V × V → F such that B(v_1, v_2) becomes a linear function of v_2 if v_1 is fixed, and becomes a linear function of v_1 if v_2 is fixed. In particular, this means that

B(c_1 v_1 + c_2 v_2, c_3 v_3 + c_4 v_4) = c_1 c_3 B(v_1, v_3) + c_1 c_4 B(v_1, v_4) + c_2 c_3 B(v_2, v_3) + c_2 c_4 B(v_2, v_4). (16)
The dot product is perhaps the most well-known example of a bilinear form, but we
can define our B to have additional properties depending on the type of space we
would like to have.
A bilinear form B is called alternating if B(v, v) = 0 for all v ∈ V, that is, if every vector is self-orthogonal. When this occurs, for any v_1, v_2 ∈ V, (16) gives

0 = B(v_1 + v_2, v_1 + v_2) = B(v_1, v_2) + B(v_2, v_1),

and so B(v_1, v_2) = −B(v_2, v_1). A bilinear form B is called nondegenerate if for all v_1 ∈ V with v_1 ≠ 0, there exists v_2 ∈ V such that B(v_1, v_2) ≠ 0. Here, by scaling this v_2 if necessary, we can always assume without loss of generality that B(v_1, v_2) = 1.
An alternating nondegenerate bilinear form is called a symplectic form.
For any field F, the canonical symplectic form on F^{2J} is

B(x, y) = ∑_{i=1}^{J} (x_{2i−1} y_{2i} − x_{2i} y_{2i−1}). (17)
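A short sketch of (17) in code (assuming Python with NumPy; the parameters p and J are illustrative choices), confirming that the canonical form is alternating, hence antisymmetric, and nondegenerate:

```python
import itertools
import numpy as np

def canonical_B(x, y, p):
    """Canonical symplectic form (17) on F_p^{2J}; x, y are integer tuples mod p."""
    x, y = np.asarray(x), np.asarray(y)
    return int(sum(x[0::2] * y[1::2] - x[1::2] * y[0::2]) % p)

p, J = 3, 1
vectors = list(itertools.product(range(p), repeat=2 * J))

# Alternating: B(v, v) = 0 for all v; hence antisymmetric mod p.
assert all(canonical_B(v, v, p) == 0 for v in vectors)
assert all((canonical_B(v, w, p) + canonical_B(w, v, p)) % p == 0
           for v in vectors for w in vectors)

# Nondegenerate: every nonzero v pairs nontrivially with some w.
assert all(any(canonical_B(v, w, p) != 0 for w in vectors)
           for v in vectors if any(v))
```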
As discussed below, any symplectic form on V is isomorphic to such a canonical form. In the particular case where V is a finite vector space, that is, a finite-dimensional vector space over a finite field, we can form an ETF from a symplectic form B on V. Here, we assume that the underlying field F is the field F_p of some prime order p; if V is more generally a finite vector space over the field F_q of prime power order q, and B is some F_q-valued symplectic form on V, it can be shown that tr_{q/p} ∘ B is an F_p-valued symplectic form on V, where tr_{q/p} is the field trace from F_q to its base field F_p. Specifically, given our symplectic form B, we can form an ETF based on the matrix Γ ∈ C^{V×V} defined by

Γ(v_1, v_2) := exp((2πi/p) B(v_1, v_2)).
The following result gives a number of facts about this matrix Γ:
Lemma 4.1.1 (Folklore). If B is a symplectic form on a finite-dimensional vector space V over F_p where p is prime, then Γ ∈ C^{V×V}, Γ(v_1, v_2) := exp((2πi/p) B(v_1, v_2)), is a self-adjoint complex Hadamard matrix with Γ(v, v) = 1 for all v ∈ V. If such a form exists, then the dimension of V over F_p is necessarily even, that is, of the form 2J for some integer J. Because of this, p^J I + Γ and p^J I − Γ are the Gram matrices of Naimark complementary p^{2J}-vector ETFs for spaces of dimension (1/2)p^J(p^J + 1) and (1/2)p^J(p^J − 1), respectively.
Proof. For any v ∈ V,

Γ(v, v) = exp((2πi/p) B(v, v)) = exp(0) = 1.

To see that Γ* = Γ, note that for any v_1, v_2 ∈ V, Γ*(v_1, v_2) is the complex conjugate of Γ(v_2, v_1), and so

Γ*(v_1, v_2) = exp(−(2πi/p) B(v_2, v_1)) = exp((2πi/p) B(v_1, v_2)) = Γ(v_1, v_2),

since B(v_2, v_1) = −B(v_1, v_2).
We next show that Γ is a complex Hadamard matrix. It has been constructed to have unimodular entries, and its Gram matrix is given by

(Γ*Γ)(v_1, v_3) = ∑_{v_2∈V} Γ*(v_1, v_2) Γ(v_2, v_3) = ∑_{v_2∈V} exp(−(2πi/p) B(v_2, v_1)) exp((2πi/p) B(v_2, v_3)) = ∑_{v_2∈V} exp((2πi/p) B(v_2, v_3 − v_1)).
If v_1 = v_3 then clearly this equals V := |V|. But if v_1 ≠ v_3, then because B is nondegenerate there exists v_4 ∈ V such that B(v_4, v_3 − v_1) = 1. So

exp(2πi/p) (Γ*Γ)(v_1, v_3) = exp((2πi/p) B(v_4, v_3 − v_1)) ∑_{v_2∈V} exp((2πi/p) B(v_2, v_3 − v_1))
= ∑_{v_2∈V} exp((2πi/p) B(v_2 + v_4, v_3 − v_1))
= ∑_{v_5∈V} exp((2πi/p) B(v_5, v_3 − v_1))
= (Γ*Γ)(v_1, v_3).

Therefore [exp(2πi/p) − 1] (Γ*Γ)(v_1, v_3) = 0. This implies that (Γ*Γ)(v_1, v_3) = 0 whenever v_1 ≠ v_3. So Γ*Γ = V I, and thus Γ is indeed a complex Hadamard matrix.
Because Γ is self-adjoint, Γ² = Γ*Γ = V I, and so any eigenvalue of Γ is either √V or −√V. Because Γ is a V × V matrix, the multiplicities a and b of √V and −√V respectively satisfy a + b = V. Also, since every diagonal entry of Γ has value 1, we moreover have V = Tr(Γ) = a√V − b√V. Combining this with a + b = V gives a = (1/2)√V(√V + 1) and b = (1/2)√V(√V − 1). Since a and b are integers, √V = a − b is an integer as well. So V is a perfect square. At the same time, V is a vector space over F_p, so V = p^{dim(V)}. Taking these together gives dim(V) = 2J where J is an integer. So V is of even dimension.
Now consider the matrix p^J I + Γ. Its eigenvalues are 2p^J and 0 with multiplicities (1/2)p^J(p^J + 1) and (1/2)p^J(p^J − 1) respectively. It is thus a positive multiple of a projection and so is the Gram matrix Φ*Φ of some tight frame Φ = {φ_v}_{v∈V} that spans a space of dimension (1/2)p^J(p^J + 1). This tight frame is equal norm with ∥φ_v∥² = (Φ*Φ)(v, v) = (p^J I + Γ)(v, v) = p^J + 1 for all v ∈ V. It is moreover an ETF since whenever v_1 ≠ v_2,

⟨φ_{v_1}, φ_{v_2}⟩ = (Φ*Φ)(v_1, v_2) = (p^J I + Γ)(v_1, v_2) = exp((2πi/p) B(v_1, v_2)).
37
It follows that p^J I − Γ is the Gram matrix Ψ*Ψ of the Naimark complement Ψ of Φ, which has
⟨ψ_{v_1}, ψ_{v_2}⟩ = (Ψ*Ψ)(v_1, v_2) = (p^J I − Γ)(v_1, v_2) =
    p^J − 1,                       v_1 = v_2,
    −exp((2πi/p) B(v_1, v_2)),     v_1 ≠ v_2.
We now give names to the ETFs of Lemma 4.1.1.
Definition 4.1.2. Let p be a prime, let B be a symplectic form on a vector space V over F_p of dimension 2J, and let F = R if p = 2 and F = C if p > 2. A sequence Φ = {φ_v}_{v∈V} of vectors in a Hilbert space over F is the corresponding symplectic ETF if, for any v_1, v_2 ∈ V,
⟨φ_{v_1}, φ_{v_2}⟩ =
    p^J + 1,                       v_1 = v_2,
    exp((2πi/p) B(v_1, v_2)),      v_1 ≠ v_2.      (18)
Such a Φ exists for any such B, and any such Φ is an N = p^{2J}-vector ETF for a space of dimension D = (1/2)p^J(p^J + 1). A sequence Ψ = (ψ_v)_{v∈V} is thus its Naimark complement if, for any v_1, v_2 ∈ V,
⟨ψ_{v_1}, ψ_{v_2}⟩ =
    p^J − 1,                        v_1 = v_2,
    −exp((2πi/p) B(v_1, v_2)),      v_1 ≠ v_2.      (19)
Any such Ψ is an N-vector ETF for a space of dimension N − D = (1/2)p^J(p^J − 1).
Example 4.1.3. As an example, we take V to be the space over F_2 of dimension 4, that is, when J = 2. We equip this space with the canonical symplectic form (17). We now use B to construct Γ as in Lemma 4.1.1. We represent Γ visually by using “+” to represent 1 and “−” to represent −1.
38
Γ = [a 16 × 16 array of “+” and “−” signs whose rows and columns are indexed by (0000), (0001), …, (1111), with (v_1, v_2) entry given by the sign of (−1)^{B(v_1, v_2)}; its first row, first column, and diagonal consist entirely of “+”].
In this example, we can see that Γ has the properties we expect. Each of its entries is either 1 or −1, it is symmetric, and each of its diagonal entries is 1. One can also verify that any two distinct columns of Γ are orthogonal, and so Γ is a Hadamard matrix.
As discussed in the proof of Lemma 4.1.1, the eigenvalues of Γ are 4 with multiplicity 10 and −4 with multiplicity 6. This means that 4I + Γ is the Gram matrix of the symplectic ETF Φ of 16 vectors in a 10-dimensional space, and 4I − Γ is the Gram matrix of its Naimark complement Ψ, which is an ETF of 16 vectors in a 6-dimensional space. Below, we calculate the binders of Φ and Ψ, both in general and for this particular example. In order to do so, we first demonstrate that they are DTETFs.
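Since the p = 2 entries are real signs, this example is easy to reproduce by machine. The following sketch assumes the coordinate-pairing symplectic form on F_2^4 as a stand-in for the canonical form (17); the eigenvalue multiplicities it checks are forced by Γ^2 = 16I and tr(Γ) = 16, and so are independent of that choice.

```python
import itertools
import numpy as np

J = 2
V = list(itertools.product(range(2), repeat=2 * J))  # 16 vectors, (0000), ..., (1111)

def B(x, y):
    # coordinate-pairing symplectic form over F_2 (assumed stand-in for (17))
    return sum(x[2*j] * y[2*j+1] + x[2*j+1] * y[2*j] for j in range(J)) % 2

Gamma = np.array([[(-1) ** B(x, y) for y in V] for x in V])

assert (Gamma == Gamma.T).all()                       # symmetric
assert (np.diag(Gamma) == 1).all()                    # unit diagonal
assert np.allclose(Gamma @ Gamma, 16 * np.eye(16))    # real Hadamard: Gamma^2 = 16 I

# eigenvalues are 4 (multiplicity 10) and -4 (multiplicity 6)
eigs = np.linalg.eigvalsh(Gamma)
assert int(np.sum(np.isclose(eigs, 4))) == 10
assert int(np.sum(np.isclose(eigs, -4))) == 6
```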
4.2 Hyperbolic Bases
In order to understand a vector space, it is useful to consider a basis for that space. In inner product spaces, we tend to prefer an orthogonal basis, that is, a basis in which every basis vector is orthogonal to every other basis vector. However, under a symplectic form, an orthogonal basis for V does not exist: every vector is orthogonal to itself, yet the form is nondegenerate, so no basis vector can possibly be orthogonal to all other basis members. In this section, we introduce the analogous notion of a hyperbolic basis, which is likewise a basis defined by its pairwise inner products.
Given any set of vectors U in a symplectic space V, we say that the orthogonal complement of U, denoted U^⊥, is the set of all v ∈ V such that B(u, v) = 0 for all u ∈ U. Note that U^⊥ is a subspace of V: clearly 0 ∈ U^⊥, and if v_1, v_2 ∈ U^⊥ and c_1, c_2 ∈ F, then for any u ∈ U,
B(c_1 v_1 + c_2 v_2, u) = c_1 B(v_1, u) + c_2 B(v_2, u) = c_1 · 0 + c_2 · 0 = 0.
Additionally, the orthogonal complement U^⊥ of any such set of vectors is equal to the orthogonal complement span(U)^⊥ of the subspace of V that is spanned by those vectors: clearly, if B(u, v) = 0 for all u ∈ U, then v is also orthogonal to any linear combination of vectors in U because of bilinearity; conversely, if v is orthogonal to every vector in span(U), then it is in particular orthogonal to every vector in U. It is also well known (see [15], for example) that for any subset U of V,
dim(span(U)) + dim(U^⊥) = dim(V),   (U^⊥)^⊥ = span(U).      (20)
These facts are remarkable since, under a symplectic form B, orthogonality is no longer a condition that implies linear independence. But interestingly, a lack of orthogonality now is, at least when referring to a pair of vectors: if B(v_1, v_2) ≠ 0, then v_1 ∉ span({v_2}) and v_2 ∉ span({v_1}). In particular, if B(v_1, v_2) = 1, we call (v_1, v_2) a hyperbolic pair. Hyperbolic pairs provide an intuitive, iterative construction for bases within a symplectic space, as we now show.
Lemma 4.2.1 (Folklore). If B is a symplectic form on a finite-dimensional vector space V over a field F, then the dimension of V is even and, writing it as 2J for some integer J, there exists a hyperbolic basis for V, that is, an ordered sequence {b_{2i−1}, b_{2i}}_{i=1}^{J} such that B(b_{i_1}, b_{i_2}) = 1 if (i_1, i_2) = (2j − 1, 2j) for some j, −1 if (i_1, i_2) = (2j, 2j − 1) for some j, and 0 otherwise. Any such hyperbolic basis is necessarily a basis.
Proof. Let us construct B = {b_{2i−1}, b_{2i}}_{i=1}^{J}, a hyperbolic basis for a symplectic space V of dimension 2J. Take any nonzero v_1 ∈ V and let it be b_1. Because B is nondegenerate, there exists some v_2 ∈ V such that B(v_1, v_2) = 1. Let this v_2 be b_2. Clearly v_2 ∉ span({v_1}), and so dim(span({b_1, b_2})) = 2.
Now consider V_0 := span({b_1, b_2})^⊥ and let B_0 be the restriction of B to V_0, that is, B_0(v_1, v_2) := B(v_1, v_2) for all v_1, v_2 ∈ V_0. Note that by (20), dim(V_0) = 2J − 2. Moreover, span({b_1, b_2}) ∩ V_0 = {0}: if v ∈ span({b_1, b_2}) ∩ V_0, then v ∈ span({b_1, b_2}), and so there exist c_1, c_2 ∈ F such that v = c_1 b_1 + c_2 b_2; since v ∈ V_0, 0 = B(v, b_2) = c_1 and 0 = B(b_1, v) = c_2, and so v = c_1 b_1 + c_2 b_2 = 0. This means that for any v ∈ V, there exist unique c_1, c_2 ∈ F and v_0 ∈ V_0 such that v = c_1 b_1 + c_2 b_2 + v_0.
Clearly B_0 is a symplectic form on V_0: B_0 is alternating since B_0(v, v) = B(v, v) = 0 for all v ∈ V_0; moreover, B_0 is nondegenerate because if v′ ∈ V_0 satisfies B_0(v_0, v′) = 0 for all v_0 ∈ V_0, then for all v ∈ V, B(v, v′) = B(c_1 b_1 + c_2 b_2 + v_0, v′) = 0, and so v′ = 0. Here, note that V_0 can never have dimension 1, since any alternating form on a 1-dimensional space is degenerate. In particular, the dimension of V is necessarily even; we write dim(V) = 2J for some integer J.
So V_0 is itself a symplectic space of dimension 2J − 2. If V_0 is not {0}, then we iterate this process until V_0 = {0}; in this way, we choose precisely 2J vectors to construct B. By the way we have constructed B, it must be a hyperbolic basis.
We next show that any hyperbolic basis is a basis for V. To prove that B is a basis, it suffices to show its linear independence. Take constants {c_k}_{k=1}^{2J} ⊆ F such that 0 = Σ_{i=1}^{J} (c_{2i−1} b_{2i−1} + c_{2i} b_{2i}). Then 0 = B(b_{2i−1}, 0) = c_{2i} and 0 = B(0, b_{2i}) = c_{2i−1} for all i.
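The iterative construction in this proof is effectively an algorithm. The following Python sketch implements it over F_p^{2J}; the coordinate-pairing form B below is an assumed stand-in for the canonical form (17).

```python
import itertools

p, J = 3, 2   # small parameters; any prime p and positive integer J work

def B(x, y):
    # coordinate-pairing symplectic form on F_p^{2J} (assumed stand-in for (17))
    return sum(x[2*j] * y[2*j+1] - x[2*j+1] * y[2*j] for j in range(J)) % p

def hyperbolic_basis():
    # Mirrors the proof of Lemma 4.2.1: pick any nonzero b1, find b2 with
    # B(b1, b2) = 1, then recurse on the orthogonal complement of span{b1, b2}.
    W = [v for v in itertools.product(range(p), repeat=2 * J) if any(v)]
    basis = []
    while W:
        b1 = W[0]
        b2 = next(w for w in W if B(b1, w) != 0)   # exists by nondegeneracy
        inv = pow(B(b1, b2), p - 2, p)             # rescale so that B(b1, b2) = 1
        b2 = tuple(inv * c % p for c in b2)
        basis += [b1, b2]
        W = [w for w in W if B(b1, w) == 0 and B(b2, w) == 0]
    return basis

basis = hyperbolic_basis()
assert len(basis) == 2 * J
for i1, u in enumerate(basis):
    for i2, w in enumerate(basis):
        # B(b_{2j-1}, b_{2j}) = 1, B(b_{2j}, b_{2j-1}) = -1, and 0 otherwise
        expected = (1 if (i2 == i1 + 1 and i1 % 2 == 0)
                    else (p - 1) if (i1 == i2 + 1 and i2 % 2 == 0) else 0)
        assert B(u, w) == expected
```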
So hyperbolic bases exist, and they are bases. In particular, because of the way we
have defined them as an ordered sequence dependent on B, they play much the same
role as an orthonormal basis does in an inner product space. For example, much like
a finite-dimensional inner product space has an orthogonal group of linear operators
that are characterized by the fact that they map any orthonormal basis to another,
we now consider the analogous concept for hyperbolic bases.
4.3 Symplectic Groups
For any symplectic form B on a vector space V of dimension 2J, the symplectic group Sp(B) is the set of linear operators on V which preserve B, namely
Sp(B) := {A : V → V | A is linear and B(Av_1, Av_2) = B(v_1, v_2) for all v_1, v_2 ∈ V}.
Lemma 4.3.1 (Folklore). Let B be a symplectic form on a finite-dimensional vector space V over a field F. Then a linear operator A maps a hyperbolic basis to a hyperbolic basis if and only if A ∈ Sp(B). Moreover, any hyperbolic basis can be mapped to any other hyperbolic basis by a member of Sp(B). The symplectic group Sp(B) is a subgroup of the general linear group (of all invertible linear operators) on V. It acts transitively on V \ {0}, that is, for any nonzero v_1, v_2 ∈ V, there exists A ∈ Sp(B) such that A(v_1) = v_2.
Proof. If A ∈ Sp(B) and B is a hyperbolic basis for V, then A(B) is also a hyperbolic basis for V: for any b_{i_1}, b_{i_2} ∈ B, we have B(Ab_{i_1}, Ab_{i_2}) = B(b_{i_1}, b_{i_2}), so any pair that was orthogonal in B is orthogonal in A(B), and any hyperbolic pair in B is a hyperbolic pair in A(B).
42
Conversely, as we now explain, if A is a linear operator on V and A maps some hyperbolic basis B for V to another such hyperbolic basis A(B), then A ∈ Sp(B). Since B and A(B) are hyperbolic bases, we know that B(b_{i_1}, b_{i_2}) = B(Ab_{i_1}, Ab_{i_2}) for any b_{i_1}, b_{i_2} ∈ B. We just need to show that the analogous property holds for general u, v ∈ V. Since B is a basis, this follows by writing u = Σ_{m=1}^{2J} c_m b_m and v = Σ_{n=1}^{2J} d_n b_n as linear combinations of these basis elements:
B(Au, Av) = B(A Σ_{m=1}^{2J} c_m b_m, A Σ_{n=1}^{2J} d_n b_n) = Σ_{m=1}^{2J} Σ_{n=1}^{2J} c_m d_n B(Ab_m, Ab_n),
B(u, v) = B(Σ_{m=1}^{2J} c_m b_m, Σ_{n=1}^{2J} d_n b_n) = Σ_{m=1}^{2J} Σ_{n=1}^{2J} c_m d_n B(b_m, b_n),
and since B(Ab_m, Ab_n) = B(b_m, b_n) for all m and n, these two double sums agree, giving B(Au, Av) = B(u, v).
In particular, since V has a hyperbolic basis, and since any hyperbolic basis for V is necessarily a basis for it, any A ∈ Sp(B) maps a particular basis for V to another basis for it. Thus, any A ∈ Sp(B) is necessarily invertible, and so Sp(B) is a subset of the general linear group on V.
To see that Sp(B) is a subgroup of the general linear group, note that I ∈ Sp(B) since B(Iv_1, Iv_2) = B(v_1, v_2) for all v_1, v_2 ∈ V. If A_1, A_2 ∈ Sp(B), then A_1 A_2 ∈ Sp(B) since, for all v_1, v_2 ∈ V,
B(A_1 A_2 v_1, A_1 A_2 v_2) = B(A_2 v_1, A_2 v_2) = B(v_1, v_2).
And if A ∈ Sp(B), then A^{−1} ∈ Sp(B) since, for all v_1, v_2 ∈ V,
B(v_1, v_2) = B(AA^{−1} v_1, AA^{−1} v_2) = B(A^{−1} v_1, A^{−1} v_2).
So Sp(B) is a subgroup of the general linear group on V.
To see that the action (A, v) ↦ Av of Sp(B) on V \ {0} is transitive, take any nonzero v, v′ ∈ V and construct a hyperbolic basis B whose first basis vector is v and a second hyperbolic basis B′ whose first basis vector is v′. Since any hyperbolic basis is a basis, there is a unique invertible linear operator A on V that maps the former basis to the latter, and so Av = v′. Since both of these bases are hyperbolic bases, A ∈ Sp(B).
From here it follows that the symplectic ETF Φ is a DTETF.
Lemma 4.3.2 ([20]). The symplectic ETF (18) and its Naimark complement (19)
are DTETFs.
Proof. Let Φ be the symplectic ETF derived from the symplectic form B on the vector space V of dimension 2J as defined in (18), and let Ψ be its Naimark complement. If we take v_1, v_2, v_3, v_4 ∈ V such that v_1 ≠ v_2 and v_3 ≠ v_4, the fact that Sp(B) acts transitively on V \ {0} gives A ∈ Sp(B) such that A(v_1 − v_2) = v_3 − v_4. So, if we define f : V → V as f(v) := A(v − v_2) + v_4 = Av − Av_2 + v_4, we see that f(v_1) = v_3 and f(v_2) = v_4. Since f is a composition of a shift and a symplectic operator, both of which are invertible, f is invertible and is thus a permutation of V, namely a permutation on Φ’s indexing set.
To show that Φ is a DTETF, it suffices to show that every such permutation is a symmetry of Φ. That is, for any A ∈ Sp(B) and v ∈ V, there exists a sequence of unimodular scalars {z_{v′}}_{v′∈V} such that ⟨φ_{Av_1+v}, φ_{Av_2+v}⟩ = z_{v_1} z_{v_2}* ⟨φ_{v_1}, φ_{v_2}⟩ for all v_1, v_2 ∈ V. Here, defining z_{v′} = exp(−(2πi/p) B(v, Av′)) for all v′ ∈ V,
⟨φ_{Av_1+v}, φ_{Av_2+v}⟩ = exp((2πi/p) B(Av_1 + v, Av_2 + v))
= exp((2πi/p)[B(Av_1, v) + B(v, Av_2) + B(Av_1, Av_2)])
= exp(−(2πi/p) B(v, Av_1)) exp((2πi/p) B(v, Av_2)) exp((2πi/p) B(v_1, v_2))
= z_{v_1} z_{v_2}* ⟨φ_{v_1}, φ_{v_2}⟩.
Therefore Φ is a DTETF. It is now immediate that its Naimark complement Ψ is
as well.
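The symmetry computation above can be spot-checked numerically. The sketch below assumes J = 1, where Sp(B) = SL(2, F_p), and uses one particular shear A and shift v; the scalars z_{v′} are as in the proof.

```python
import itertools
import numpy as np

p = 3
V = list(itertools.product(range(p), repeat=2))

def B(x, y):
    # symplectic form on F_p^2 (assumed stand-in for (17))
    return (x[0] * y[1] - x[1] * y[0]) % p

def gram(x, y):
    # off-diagonal Gram entries of the symplectic ETF (18)
    return np.exp(2 * np.pi * 1j / p * B(x, y))

A = lambda x: ((x[0] + x[1]) % p, x[1])   # a shear in SL(2, F_p) = Sp(B)
shift = (1, 2)                            # an arbitrary translation v

def f(x):
    # the permutation x -> Ax + v from the proof of Lemma 4.3.2
    return tuple((a + b) % p for a, b in zip(A(x), shift))

z = {x: np.exp(-2 * np.pi * 1j / p * B(shift, A(x))) for x in V}

# <phi_{Av1+v}, phi_{Av2+v}> = z_{v1} * conj(z_{v2}) * <phi_{v1}, phi_{v2}>
for v1, v2 in itertools.permutations(V, 2):
    assert np.isclose(gram(f(v1), f(v2)), z[v1] * np.conj(z[v2]) * gram(v1, v2))
```

Any other choices of A ∈ Sp(B) and shift v pass the same check.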
So by Theorem 3.3.2, bin(Φ) is either empty or forms a BIBD, and bin(Ψ) is
also either empty or forms a BIBD. These two cases turn out to be different in a
meaningful way, and so we will separate them into different sections. We begin by
discussing the Naimark complement Ψ.
4.4 The Binder of the Naimark Complement of the Symplectic Equiangular Tight Frame
Since any vector v ∈ V is a member of its own orthogonal complement, it is orthogonal to anything in its one-dimensional span. In fact, any two vectors in span({v}) = {cv : c ∈ F} are orthogonal. This is an example of what is known as a totally orthogonal subspace. More generally, a subset S of V is a totally orthogonal subset if B(v_1, v_2) = 0 for any v_1, v_2 ∈ S ⊆ V. By (20), the dimension of a totally orthogonal subspace cannot exceed J: if S is a totally orthogonal subspace, then S ⊆ S^⊥, and so dim(S) ≤ dim(S^⊥), implying
2J = dim(V) = dim(S) + dim(S^⊥) ≥ 2 dim(S).
We now detail how to find totally orthogonal subspaces of this maximal dimension J, as their existence and composition will be fundamental to later results. In general, if S is any totally orthogonal subset of V, then span(S) is a totally orthogonal subspace; to see this, write any two members of span(S) as linear combinations of members of S and then distribute B. Now, for V of dimension 2J, take any hyperbolic basis B = {b_{2i−1}, b_{2i}}_{i=1}^{J} for V and recall that each pair (b_{2i−1}, b_{2i}) is a hyperbolic pair. Let us take precisely one representative from each pair (without loss of generality, the first of each pair) to create a set B_0. Clearly this is a set of J linearly independent orthogonal vectors, and thus their span is a totally orthogonal subspace of V of maximum dimension J. This is known as a Lagrangian subspace of V. To be precise, in light of (20), a subspace L of V is Lagrangian if it is totally orthogonal with dim(L) = J, or equivalently if L = L^⊥. We now count these subspaces for later use.
Lemma 4.4.1 (Folklore). Let B be a symplectic form on a vector space V over F_p of dimension 2J. The number of Lagrangian subspaces of V is ∏_{i=1}^{J} (p^i + 1).
Proof. We claim that an ordered set of nonzero vectors S = {b_i}_{i=1}^{J} is a basis for a Lagrangian subspace of V if and only if, for all k ∈ {1, …, J}, b_k ∈ ({b_i}_{i=1}^{k−1})^⊥ but b_k ∉ span({b_i}_{i=1}^{k−1}). One direction is obvious: if S is a basis for a Lagrangian subspace, then for all b_k ∈ S, b_k ∉ span(S \ {b_k}), and B(b_k, b_j) = 0 for all b_k, b_j ∈ S where b_k ≠ b_j.
Now assume that for each b_k ∈ S, b_k ∈ ({b_i}_{i=1}^{k−1})^⊥ but b_k ∉ span({b_i}_{i=1}^{k−1}). Then for any b_k, b_j ∈ S where b_k ≠ b_j, without loss of generality assume k < j. So B(b_k, b_j) = 0 and, because B is symplectic, B(b_j, b_k) = 0. So S is totally orthogonal. Moreover, from linear algebra, we know {b_i}_{i=1}^{J} is linearly independent if and only if b_k ∉ span({b_i}_{i=1}^{k−1}) for all k ∈ {1, 2, …, J}. So S is a totally orthogonal set of linearly independent vectors with cardinality J, that is, a basis for a Lagrangian subspace. With this, we can count the number of Lagrangian subspaces.
We start by counting all ordered bases for all Lagrangian subspaces. There are p^{2J} − 1 choices for b_1, as it can be anything but 0. For b_2, there are
|span({b_1})^⊥| − |span({b_1})| = p^{2J−1} − p
such choices. In general, for b_k, there are
|span({b_1, …, b_{k−1}})^⊥| − |span({b_1, …, b_{k−1}})| = p^{2J−(k−1)} − p^{k−1}
choices. The total number of ordered bases for Lagrangian subspaces of V is thus ∏_{i=0}^{J−1} (p^{2J−i} − p^i).
To count the actual number of Lagrangian subspaces of V, we now divide this number by the number of ordered bases that span a specific Lagrangian subspace. Given this subspace L, for b_1 we have p^J − 1 choices: any nonzero vector in L. For b_2 we have |L| − |span({b_1})| = p^J − p choices. Generally, for b_k we have |L| − |span({b_1, …, b_{k−1}})| = p^J − p^{k−1} choices. So there are ∏_{i=0}^{J−1} (p^J − p^i) ordered bases for L. Overall, the total number of Lagrangian subspaces of V is
∏_{i=0}^{J−1} (p^{2J−i} − p^i)/(p^J − p^i) = ∏_{i=0}^{J−1} (p^{2J−2i} − 1)/(p^{J−i} − 1).
Making the change of variables j = J − i, this simplifies to
∏_{i=0}^{J−1} (p^{2J−2i} − 1)/(p^{J−i} − 1) = ∏_{j=1}^{J} (p^{2j} − 1)/(p^j − 1) = ∏_{j=1}^{J} (p^j + 1).
Example 4.4.2. For example, when p = 2 and J = 2 there are exactly 15 Lagrangian
subspaces. Represented using the canonical symplectic form (17), they are
{0000, 0010, 1000, 1010}, {0000, 0001, 1000, 1001}, {0000, 0011, 1000, 1011},
{0000, 0011, 1100, 1111}, {0000, 0001, 1100, 1101}, {0000, 0101, 1010, 1111},
{0000, 0111, 1001, 1110}, {0000, 0010, 0100, 0110}, {0000, 0101, 1011, 1110},
{0000, 0110, 1011, 1101}, {0000, 0010, 1100, 1110}, {0000, 0110, 1001, 1111},
{0000, 0001, 0100, 0101}, {0000, 0011, 0100, 0111}, {0000, 0111, 1010, 1101}.
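This list can be reproduced by brute force. The sketch below assumes the coordinate-pairing symplectic form on F_2^4 as a stand-in for the canonical form (17), and recovers the count ∏_{i=1}^{2}(2^i + 1) = 15 from Lemma 4.4.1.

```python
import itertools

V = list(itertools.product(range(2), repeat=4))   # F_2^4; V[0] is the zero vector

def B(x, y):
    # coordinate-pairing symplectic form over F_2 (assumed stand-in for (17))
    return (x[0]*y[1] + x[1]*y[0] + x[2]*y[3] + x[3]*y[2]) % 2

def add(x, y):
    return tuple((a + b) % 2 for a, b in zip(x, y))

def span(gens):
    out = {(0, 0, 0, 0)}
    for g in gens:
        out |= {add(g, t) for t in out}
    return out

# over F_2, a Lagrangian subspace is the span of two distinct, orthogonal,
# nonzero vectors; deduplicate the spans with a set of frozensets
lagrangians = {frozenset(span([u, w]))
               for u, w in itertools.combinations(V[1:], 2) if B(u, w) == 0}

assert len(lagrangians) == 15
assert all(len(L) == 4 and all(B(x, y) == 0 for x in L for y in L)
           for L in lagrangians)
```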
If we take any Lagrangian subspace L and any v ∈ V, we can shift L by v: v + L = {v + v′ : v′ ∈ L}. We call this an Affine Lagrangian Subspace (ALS). We will now prove our second main result of the thesis, namely that the binder of the Naimark complement Ψ of the symplectic ETF Φ consists precisely of all ALSs in V, and that therefore the set of all such ALSs forms a BIBD.
Theorem 4.4.3. Let B be a symplectic form on a vector space V over F_p of dimension 2J and let Ψ = {ψ_v}_{v∈V} be the Naimark complement (19) of the corresponding symplectic ETF. Then bin(Ψ) consists of all affine Lagrangian subspaces of V, namely all sets of the form v + L where v ∈ V and L = L^⊥, and it forms a BIBD with parameters
V = p^{2J},  K = p^J,  Λ = ∏_{i=1}^{J−1} (p^i + 1),  R = ∏_{i=1}^{J} (p^i + 1),  B = p^J ∏_{i=1}^{J} (p^i + 1).
Proof. Let {φ_v}_{v∈V} be a symplectic ETF and let {ψ_v}_{v∈V} be its Naimark complement. Applying (12) where “Φ” is Ψ gives that a subsequence Ψ_K = {ψ_v}_{v∈K} is a regular simplex, that is, that K ∈ bin(Ψ), if and only if |K| = (p^J − 1) + 1 = p^J and, for all pairwise distinct v_1, v_2, v_3 ∈ K,
−1 = ⟨ψ_{v_1}, ψ_{v_2}⟩⟨ψ_{v_2}, ψ_{v_3}⟩⟨ψ_{v_3}, ψ_{v_1}⟩
= [−exp((2πi/p) B(v_1, v_2))][−exp((2πi/p) B(v_2, v_3))][−exp((2πi/p) B(v_3, v_1))]
= −exp((2πi/p)[B(v_1, v_2) + B(v_2, v_3) + B(v_3, v_1)]),
or equivalently that
0 = B(v_1, v_2) + B(v_2, v_3) + B(v_3, v_1) = B(v_1 − v_3, v_2 − v_3)      (21)
or equivalently that
0 = B(v
1
, v
2
) + B(v
2
, v
3
) + B(v
3
, v
1
) = B(v
1
v
3
, v
2
v
3
) (21)
for all such v
1
, v
2
, v
3
V. Note that these conditions are translation invariant: a
given subset K of V satisfies these properties if and only if v + K does as well for any
v V. In particular, for any K bin(Ψ), taking any v K we have that L := v +K
48
satisfies (21) and contains 0. Letting v
3
= 0 in (21) gives 0 = B(v
1
, v
2
) for all distinct
nonzero v
1
, v
2
K. Since this is clearly also true if v
1
= v
2
, v
1
= 0, or v
2
= 0, we
thus have that L is totally orthogonal.
To summarize our argument up to this point: if K ∈ bin(Ψ), then there exist v ∈ V and a totally orthogonal subset L of cardinality p^J such that K = v + L. Now note that since L is totally orthogonal, so is span(L), and |span(L)| ≥ |L| = p^J, so the dimension of span(L) is at least J. At the same time, since span(L) is totally orthogonal, its dimension is at most J. Thus dim(span(L)) = J, implying L = span(L) is itself a totally orthogonal subspace of V of dimension J, namely a Lagrangian subspace of V.
Altogether, we see that any K ∈ bin(Ψ) is an ALS. Conversely, any Lagrangian subspace L of V satisfies (21), having B(v_1, v_2) + B(v_2, v_3) + B(v_3, v_1) = 0 + 0 + 0 = 0, and has cardinality p^J, implying any shift v + L does as well, and so every ALS belongs to bin(Ψ). Having that bin(Ψ) is the set of all ALSs on V, we in particular have that bin(Ψ) is nonempty. Since Ψ is a DTETF (Lemma 4.3.2), our main result from Chapter 3 (Theorem 3.3.2) gives that bin(Ψ) forms a BIBD.
Let us discuss the parameters of this BIBD. Clearly, the number of points V of our BIBD is |V| = p^{2J}, and the size of any block K is the size of an ALS, which is the same as the size of a Lagrangian subspace, namely p^J.
Here, R is the number of ALSs containing any given v ∈ V. Letting v = 0 in particular gives that R is the number of Lagrangian subspaces. From Lemma 4.4.1, this is ∏_{i=1}^{J} (p^i + 1).
Now that we have V, K, and R, we can calculate Λ directly:
Λ = (K − 1)/(V − 1) · R = (p^J − 1)/(p^{2J} − 1) ∏_{i=1}^{J} (p^i + 1) = 1/(p^J + 1) ∏_{i=1}^{J} (p^i + 1) = ∏_{i=1}^{J−1} (p^i + 1).
We can also calculate B directly:
B = VR/K = (p^{2J}/p^J) R = p^J ∏_{i=1}^{J} (p^i + 1).
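For the small case p = 2 and J = 2, all of these parameters can be confirmed by brute force. The following sketch (with the coordinate-pairing form as an assumed stand-in for (17)) enumerates the ALSs and checks V, K, Λ, R, and B, as well as condition (21).

```python
import itertools

V = list(itertools.product(range(2), repeat=4))   # F_2^4; V[0] is the zero vector

def B(x, y):
    # coordinate-pairing symplectic form over F_2 (assumed stand-in for (17))
    return (x[0]*y[1] + x[1]*y[0] + x[2]*y[3] + x[3]*y[2]) % 2

def add(x, y):
    return tuple((a + b) % 2 for a, b in zip(x, y))

def span(gens):
    out = {(0, 0, 0, 0)}
    for g in gens:
        out |= {add(g, t) for t in out}
    return out

lagrangians = {frozenset(span([u, w]))
               for u, w in itertools.combinations(V[1:], 2) if B(u, w) == 0}
# the blocks of the BIBD are the ALSs, i.e. all cosets v + L
blocks = {frozenset(add(v, x) for x in L) for v in V for L in lagrangians}

assert len(blocks) == 60                                    # B = 60
assert all(len(K) == 4 for K in blocks)                     # K = 4
assert all(sum(v in K for K in blocks) == 15 for v in V)    # R = 15
assert all(sum(u in K and w in K for K in blocks) == 3
           for u, w in itertools.combinations(V, 2))        # Lambda = 3

# every block satisfies the regular-simplex condition (21)
assert all((B(v1, v2) + B(v2, v3) + B(v3, v1)) % 2 == 0
           for K in blocks for v1, v2, v3 in itertools.combinations(K, 3))
```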
Remark 4.4.4. Theorem 4.4.3 provides formulae for B and Λ, but without any sense of what they represent. It is worth noting that calculating the parameters of this BIBD did rely on a counting argument, namely the one given for counting R in Lemma 4.4.1. This implies that, when generalizing this theory to other families of DTETFs, we should expect to need to count Λ, R, or B in order to determine these three numbers. In order to guide our intuition in future cases, we now provide more direct counting arguments for both B and Λ.
Recall that blocks here equate to ALSs. We can think of these as cosets of Lagrangian subspaces. Since every Lagrangian subspace is a subgroup of size p^J in a group of size p^{2J}, it has p^J cosets. Therefore, we just need to count the Lagrangian subspaces as we did in Lemma 4.4.1 and multiply that result by p^J, which gives us the result in Theorem 4.4.3.
the result in Theorem 4.4.3.
In general, Λ is the number of blocks containing any given pair of distinct vertices.
Since we already know that bin(Ψ) forms a BIBD, we can choose any two distinct
vertices that make it easy to count the number of ALSs that contain them. Choosing
these to be 0 and any nonzero v V, we simply count the number of Lagrangian
subspaces L containing v. Since we know v L, we can find this number by counting
the ways we can construct an ordered basis for a Lagrangian subspace that has v
as its first member, and then dividing it by the number of ordered bases for any
particular such subspace that similarly has v as its first member. Paralleling the
50
proof of Lemma 4.4.1 gives this number to be
Λ =
J1
Y
i=1
p
2Ji
p
i
p
J
p
i
=
J1
Y
i=1
p
2J2i
1
p
Ji
1
=
J1
Y
j=1
(p
j
+ 1).
Remark 4.4.5. Theorem 4.4.3 resolves some open problems of [4]. In particular, it disproves Conjecture 5.12 of [4] as stated, but proves that it is true given an appropriate modification to the conjectured value of Λ. To explain, for any odd prime p and positive integer J, [4] constructs a p^{2J}-vector Gabor–Steiner ETF for a complex space of dimension (1/2)p^J(p^J − 1). As mentioned in [20] and detailed in [15], this type of Gabor–Steiner ETF is equivalent to the Naimark complement of a symplectic ETF (18). Since the binder is preserved by equivalence, Theorem 4.4.3 proves that this binder forms a BIBD with parameters (V, K, Λ) = (p^{2J}, p^J, ∏_{i=1}^{J−1} (p^i + 1)).
In particular, when J = 1 and J = 2, bin(Ψ) forms a BIBD with (V, K, Λ) being (p^2, p, 1) and (p^4, p^2, p + 1), respectively; this is shown in Theorems 5.6 and 5.8 of [4], respectively. Based on these cases, Conjecture 5.12 of [4] predicts that, for any positive integer J, bin(Ψ) forms a BIBD with (V, K, Λ) = (p^{2J}, p^J, (p^J − 1)/(p − 1)). By Theorem 4.4.3, we see that this conjecture is false for J ≥ 3, as the correct value of Λ is ∏_{i=1}^{J−1} (p^i + 1). In particular, in general, we find that Ψ has a binder much larger than predicted by [4].
We conclude this section with the first interesting example of a binder that arises as a consequence of Theorem 4.4.3.
Example 4.4.6. From Example 4.1.3, recall that when p = 2 and J = 2, the Naimark complement Ψ of the symplectic ETF Φ consists of 16 vectors that span a space of dimension 6. From Theorem 4.4.3 we know that bin(Ψ) forms a BIBD, and we know the parameters of that BIBD are (V, K, Λ, R, B) = (16, 4, 3, 15, 60). Theorem 4.4.3 moreover gives that bin(Ψ) consists of all ALSs in V, of which there are 60. If we scale and sign the incidence matrix of this BIBD as in (14), then we get a sparse representation of the Naimark complement of Ψ, which is the symplectic ETF Φ. We represent this phased incidence matrix in Figure 1, where “+” represents 1, “−” represents −1, and empty space represents 0.
Since Ψ has a nonempty binder, we know that it achieves equality in its spark bound (2). This is bad from the perspective of compressed sensing, because it means there exist linearly dependent subsequences {ψ_n}_{n∈K} of Ψ where |K| = S + 1. In this example, this means there are certain ways in which four simultaneous transmissions could be confused with 0. However, even in this case, the chance of such confusion is low. There are (16 choose 4) = 1820 subsequences of Ψ of size 4 in this case, and only 60 of them are linearly dependent, namely the members of bin(Ψ). This means that, if a subsequence of size 4 were chosen randomly from Ψ, there would only be approximately a 0.03 chance that those vectors are linearly dependent.
In application, the chance for confusion is actually even lower. Even if four transmitters with linearly dependent codewords are transmitting simultaneously, not all transmissions will result in a received signal of 0. The phased incidence matrix of this example shows us that the signing of the signal also matters; only a scalar multiple of the combination shown in the corresponding row of the example will result in a combination of 0. Moreover, in application there may also be a factor of power; even when certain linear combinations of codewords are zero, this will often still not be the case when the coefficients of the codewords vary, such as when the transmitters are at different distances from the receiver.
Φ = (1/√3) · [a 60 × 16 array of “+”, “−”, and blank entries: the phased incidence matrix of bin(Ψ), whose rows are indexed by the 60 ALSs and whose columns are indexed by the 16 elements of V, with exactly four nonzero entries (each “+” for 1 or “−” for −1) in every row].
Figure 1: Phased Incidence Matrix of the Binder of Ψ
4.5 The Binder of the Symplectic Equiangular Tight Frame
We now modify some of the techniques of the previous theorem to instead characterize the members of the binder of the symplectic ETF itself. Though the difference in argument is slight, it has profound consequences, as we see further below.
Theorem 4.5.1. Let B be a symplectic form on a vector space V over F_p of dimension 2J, and let Φ = {φ_v}_{v∈V} be the corresponding symplectic ETF. Then bin(Φ) is empty if p > 2. Moreover, when p = 2, bin(Φ) consists of the sets of the form v + ({0} ∪ S) where |S| = 2^J + 1 and B(v_1, v_2) = 1 for all distinct v_1, v_2 ∈ S.
Proof. Consider the symplectic ETF Φ. In order for a subsequence Φ_K = {φ_v}_{v∈K} of Φ to be a regular simplex, by (12) it must have the property that, given any three distinct vectors v_1, v_2, v_3 ∈ K, their triple product satisfies ⟨φ_{v_1}, φ_{v_2}⟩⟨φ_{v_2}, φ_{v_3}⟩⟨φ_{v_3}, φ_{v_1}⟩ = −1.
Because we have an explicit formula for Φ’s Gram matrix, we know that ⟨φ_{v_1}, φ_{v_2}⟩ = Γ(v_1, v_2) = exp((2πi/p) B(v_1, v_2)) whenever v_1 ≠ v_2. These are, by design, pth roots of unity, which form a group under multiplication. Since −1 is not a pth root of unity for any prime p ≠ 2, this system has no solution in those cases. Therefore it is only possible for Φ_K to be a regular simplex in the case where p = 2.
When p = 2, (18) gives that
⟨φ_{v_1}, φ_{v_2}⟩ =
    2^J + 1,              v_1 = v_2,
    (−1)^{B(v_1, v_2)},   v_1 ≠ v_2.
Here, (12) gives that a subset K of V belongs to bin(Φ) if and only if |K| = (2^J + 1) + 1 = 2^J + 2 and
−1 = ⟨φ_{v_1}, φ_{v_2}⟩⟨φ_{v_2}, φ_{v_3}⟩⟨φ_{v_3}, φ_{v_1}⟩ = (−1)^{B(v_1, v_2) + B(v_2, v_3) + B(v_3, v_1)}
for all pairwise distinct v_1, v_2, v_3 ∈ K, namely if and only if, for all such v_1, v_2, v_3 ∈ K,
1 = B(v_1, v_2) + B(v_2, v_3) + B(v_3, v_1) = B(v_1 + v_3, v_2 + v_3).
In an argument similar to that used in Theorem 4.4.3, note that these conditions are translation invariant, and so we have that K satisfies them if and only if it is of the form K = v + K_0 where K_0 satisfies these conditions and moreover has 0 ∈ K_0. Taking v_3 = 0 gives that B(v_1, v_2) = 1 for all nonzero distinct v_1, v_2 ∈ K_0. That is, S := K_0 \ {0} must be a totally nonorthogonal set. To summarize, bin(Φ) indeed consists of sets of the form v + ({0} ∪ S) where |S| = 2^J + 1 and B(v_1, v_2) = 1 for all distinct v_1, v_2 ∈ S.
This brings us to the final main result of the thesis. We establish an upper bound on the size of the type of sets S that arose in the previous result, and this implies that bin(Φ) is empty even when p = 2, except in two small cases.
Theorem 4.5.2. Let B be a symplectic form on a vector space V over F_2 of dimension 2J, and let S be any subset of V such that B(v_1, v_2) = 1 for all v_1, v_2 ∈ S where v_1 ≠ v_2. Then |S| ≤ 2J + 1. As a consequence, the binder of the corresponding symplectic ETF Φ (18) is empty when J ≥ 3.
Proof. We argue by induction on J. For the base case, let J = 1. Then V is a set of vectors of the form V = {0, v_1, v_2, v_1 + v_2} where (v_1, v_2) is a hyperbolic pair. If 0 ∈ S, then S = {0}, because for any nonzero v ∈ V, B(0, v) = 0, so v ∉ S. If 0 ∉ S, then |S| ≤ |V \ {0}| = 3 = 2J + 1. In either case, |S| ≤ 2J + 1.
For the inductive argument, assume that for any symplectic space V′ over F_2 of dimension 2(J − 1) = 2J − 2, any subset S′ with the property that B(v_1, v_2) = 1 for all v_1, v_2 ∈ S′ where v_1 ≠ v_2 has cardinality at most |S′| ≤ 2(J − 1) + 1 = 2J − 1.
Then let S ⊆ V have the property that B(v_1, v_2) = 1 for all v_1, v_2 ∈ S where v_1 ≠ v_2. Take two such vectors v_1, v_2 ∈ S such that v_1 ≠ v_2. (Note that if there are not two such vectors, then |S| < 2 ≤ 2J + 1.) Now let V_0 := span({v_1, v_2})^⊥ and consider the set S_0 := v_1 + v_2 + S \ {v_1, v_2}. Clearly |S_0| = |S| − 2. Also S_0 ⊆ V_0 since, writing any v ∈ S_0 as v = v_1 + v_2 + v_0 where v_0 ∈ S \ {v_1, v_2},
B(v, v_1) = B(v_0 + v_1 + v_2, v_1) = B(v_0, v_1) + B(v_1, v_1) + B(v_2, v_1) = 1 + 0 + 1 = 0,
B(v, v_2) = B(v_0 + v_1 + v_2, v_2) = B(v_0, v_2) + B(v_1, v_2) + B(v_2, v_2) = 1 + 1 + 0 = 0.
Furthermore, for any v_3, v_4 ∈ S_0 where v_3 ≠ v_4, there exist v_5, v_6 ∈ S such that
B(v_3, v_4) = B(v_1 + v_2 + v_5, v_1 + v_2 + v_6)
= B(v_5, v_6) + B(v_5, v_1) + B(v_5, v_2) + B(v_1, v_6) + B(v_1, v_1) + B(v_1, v_2) + B(v_2, v_6) + B(v_2, v_1) + B(v_2, v_2)
= 1 + 1 + 1 + 1 + 0 + 1 + 1 + 1 + 0 = 1.
Therefore, by our inductive hypothesis, |S_0| ≤ 2J − 1, and so |S| = |S_0| + 2 ≤ 2J + 1. By Theorem 4.5.1, bin(Φ) is thus only nonempty when 2^J + 1 ≤ 2J + 1, i.e., only when J ≤ 2.
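The bound |S| ≤ 2J + 1 can be confirmed exhaustively in the two smallest cases. The sketch below (with the coordinate-pairing form standing in for the canonical form (17)) finds the largest totally nonorthogonal set by brute force.

```python
import itertools

def B(x, y):
    # coordinate-pairing symplectic form over F_2 (assumed stand-in for (17))
    J = len(x) // 2
    return sum(x[2*j] * y[2*j+1] + x[2*j+1] * y[2*j] for j in range(J)) % 2

def max_nonorthogonal(J):
    # size of the largest S with B(v1, v2) = 1 for all distinct v1, v2 in S
    V = list(itertools.product(range(2), repeat=2 * J))
    best = 0
    for k in range(1, 2 * J + 3):
        if any(all(B(u, w) == 1 for u, w in itertools.combinations(S, 2))
               for S in itertools.combinations(V, k)):
            best = k
    return best

assert max_nonorthogonal(1) == 3   # 2J + 1 with J = 1
assert max_nonorthogonal(2) == 5   # 2J + 1 with J = 2
```

Since all symplectic forms on F_2^{2J} are equivalent, these brute-force values do not depend on the particular form chosen.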
We already knew that the binder of the symplectic ETF is empty when p > 2, but with this theorem we now also know that the binder of the symplectic ETF is empty when p = 2 unless J ∈ {1, 2}.
When J = 1, Definition 4.1.2 gives that the symplectic ETF consists of 4 vectors that span a space of dimension 3. It is thus itself a regular simplex and its binder consists of the sole set V.
When J = 2, the space in question is F_2^4 without loss of generality. A member of the binder in this case would be of size 2^J + 2 = 6, and the corresponding bound provided by Theorem 4.5.2 is (2J + 1) + 1 = 2J + 2 = 6, so such members might exist. In [15], we find that the binder is indeed nonempty in this case. Theorem 3.3.2 tells us that this binder must form a BIBD, and again referring to [15], we find that the number of blocks in this BIBD is B = 16. Since we know V = 16 (the number of vectors in V) and K = 6 (one more than the inverse of the Welch bound), we can solve for all of the parameters of this BIBD using (13):
V = 16, K = 6, Λ = 2, R = 6, B = 16.
We recall from the proof of Theorem 4.5.1 that if K ∈ bin(Φ), then any shift of K is also in bin(Φ). In this case, it happens that this characterizes all members of bin(Φ), which is why there are 16 of them: given any one member K ∈ bin(Φ), the only other members of bin(Φ) are those obtained by shifting it by a nonzero v ∈ V, of which there are 15.
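This count of 16 blocks, and the fact that they are translates of one another, can be confirmed by brute force using the triple-product condition from the proof of Theorem 4.5.1. The sketch below assumes the coordinate-pairing symplectic form on F_2^4 as a stand-in for (17).

```python
import itertools

V = list(itertools.product(range(2), repeat=4))

def B(x, y):
    # coordinate-pairing symplectic form over F_2 (assumed stand-in for (17))
    return (x[0]*y[1] + x[1]*y[0] + x[2]*y[3] + x[3]*y[2]) % 2

def is_simplex(K):
    # condition from the proof of Theorem 4.5.1:
    # B(v1, v2) + B(v2, v3) + B(v3, v1) = 1 for all pairwise distinct v1, v2, v3
    return all((B(v1, v2) + B(v2, v3) + B(v3, v1)) % 2 == 1
               for v1, v2, v3 in itertools.combinations(K, 3))

blocks = [set(K) for K in itertools.combinations(V, 6) if is_simplex(K)]
assert len(blocks) == 16

# the 16 blocks are exactly the translates of any one of them
shift = lambda v, K: {tuple((a + b) % 2 for a, b in zip(v, x)) for x in K}
assert all(shift(v, blocks[0]) in blocks for v in V)
```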
With the exception of these two cases, every symplectic ETF over F_2 has an empty binder. This means that their spark exceeds their spark bound (2), which implies they may be even more suitable for compressed sensing than they were already known to be. The fact that every member of a highly symmetric infinite family of real ETFs has an empty binder is a striking result; it has seldom, if ever, been observed for such frames before. This result motivates the continued pursuit of deterministic matrices with excellent restricted isometry property constants.
V. Conclusions
Compressed sensing has emerged as a mathematical framework for mitigating
issues that arise by “pushing the envelope” of current sensing and communication
hardware. Typically, a matrix Φ is well-suited for compressed sensing only if its
coherence is small and its spark is large. Here it is natural to want Φ to be an
ETF if possible since it will have the smallest possible coherence (3) and so also the
largest possible lower spark bound (2). That said, an ETF Φ achieves equality in the
spark bound and therefore has the worst possible spark subject to the best possible
bound if and only if its binder is nonempty. When this occurs, its binder indicates
all linearly dependent subsequences of the ETF of minimal size, a set which is key to
understanding how likely a “compressed sensor” is to be confused.
In this thesis, we have shown that any DTETF has a binder that is either empty
or forms a BIBD (Theorem 3.3.2). A full classification of DTETFs has recently been
obtained [21]. Together, these facts suggest that this particular research community
should embark on a systematic program to determine whether the binder of any given
DTETF is empty or not, and in the latter case, to determine the parameters of the
corresponding BIBD. As a first step in that research program, this thesis has
moreover fully characterized the binder of every symplectic ETF and its Naimark
complement.
Specifically, in Theorem 4.5.2, we proved that the binder of any symplectic ETF Φ
is empty in all but finitely many cases, and therefore that, excluding those trivially
small examples, the spark of Φ is strictly greater than that guaranteed by the
spark bound (2). This is an unusual result for this community; we know of no other
instance where so many highly symmetric ETFs, including an infinite family of
real-valued ones, have provably empty binders. This indicates that this family might
be particularly well-suited to compressed sensing applications, and motivates further
study in multiple ways. Clearly it raises the question of which other families of
DTETFs have provably empty binders. It also raises the question of “What exactly
is the spark of the symplectic ETF?” We have shown that it is strictly greater than
what is guaranteed by the spark bound, but the question of how much greater remains
to be answered.
Meanwhile, in Theorem 4.4.3, we proved that the binder of the Naimark complement
Ψ of any symplectic ETF Φ consists of all affine Lagrangian subspaces of
the underlying symplectic space. We went on to provide explicit formulae for the
parameters of the BIBD that they form. This means that we know spark(Ψ) exactly,
because Ψ achieves equality in its spark bound (2). It moreover means that we have
counted the number of linearly dependent subsequences of Ψ of size spark(Ψ);
comparing it to the number of all subsequences of Ψ of that size reveals that a random
such subsequence is very likely to be linearly independent. Although the fact that
these ETFs achieve equality in the spark bound means that they are less than ideal
from the perspective of the restricted isometry property, it is worth noting that
determining the spark of an ETF is seemingly a hard task in general [1], and yet we
have nevertheless managed to do exactly this for every member of an infinite
family of them. Moreover, recalling Example 4.4.6, we know that there are precisely
60 linearly dependent 4-vector subsequences of Ψ. This raises the question of how
many 5-vector linearly dependent subsequences of Ψ there are. Clearly we can take
any such 4-vector subsequence and append one more vector, but are these ‘nested’
subsequences the only linearly dependent subsequences, or are there additional such
subsequences to be characterized?
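To make "very likely" quantitative: under the assumption that the instance of Ψ in Example 4.4.6 consists of 16 vectors (that example is not reproduced here, so this count is an assumption used only for illustration), the fraction of 4-vector subsequences that are linearly dependent is small:

```python
from math import comb

# Assumption: the instance of Psi in Example 4.4.6 has N = 16 vectors;
# that example gives 60 linearly dependent 4-vector subsequences.
N, dependent = 16, 60
total = comb(N, 4)        # all 4-vector subsequences -> 1820
print(total)
print(dependent / total)  # fraction that are dependent, roughly 3%
```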
Many other questions arise from these results that are worthy of additional
investigation. For example, when the binder of both a DTETF and its Naimark complement
are nonempty, what is the relationship between the corresponding BIBDs? This is
explored in our accompanying journal article [15], where it is shown that any block
of either BIBD is an oval of the other. While this thesis characterized the binders
of the symplectic ETF and its Naimark complement, there are several other known
infinite families of DTETFs. In particular, this thesis exploits an explicit formula
for the entries of the Gram matrix of the symplectic ETF to prove its results; this
suggests that similar formulae for other DTETFs may lead to similar results for those
ETFs. In our accompanying journal article [15], we perform just such an analysis to
calculate the binder of any member of two other particular types of DTETFs and
their Naimark complements. These arise from quadratic forms over F_2, and turn
out to be intimately related to the instances of the symplectic ETF and its Naimark
complement with p = 2. In that context, for any positive integer J, Theorem 5.11 of
[15] remarkably implies that there exists a real DTETF that consists of
2^(J-1)(2^J + 1) vectors that span a space of dimension (1/3)(2^(J-1) + 1)(2^J + 1)
whose binder is empty whenever J ≥ 5. Such ETFs currently seem to be the most
promising candidates
for deterministic matrices that might rival randomly chosen ones in terms of the
restricted isometry property.
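For concreteness, the first few instances promised by Theorem 5.11 of [15] can be tabulated directly from the formulas just quoted (a minimal sketch):

```python
# Tabulate (number of vectors N, dimension D) of the real DTETFs with
# empty binders promised by Theorem 5.11 of [15], for J >= 5.
for J in range(5, 9):
    N = 2 ** (J - 1) * (2 ** J + 1)
    D = (2 ** (J - 1) + 1) * (2 ** J + 1) // 3  # always an exact integer
    print(J, N, D)  # J = 5 gives N = 528 vectors in dimension D = 187
```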
Bibliography
1. B. Alexeev, J. Cahill, D. G. Mixon, Full spark frames, J. Fourier Anal. Appl.
18 (2012) 1167–1194.
2. A. S. Bandeira, E. Dobriban, D. G. Mixon, W. F. Sawin, Certifying the re-
stricted isometry property is hard, IEEE Trans. Inform. Theory 59 (2013)
3448–3450.
3. A. S. Bandeira, M. Fickus, D. G. Mixon, P. Wong, The road to deterministic
matrices with the Restricted Isometry Property, J. Fourier Anal. Appl. 19
(2013) 1123–1149.
4. B. Bodmann, E. J. King, Optimal arrangements of classical and quantum states
with limited purity, J. Lond. Math. Soc. 101 (2020) 393–431.
5. J. Bourgain, S. Dilworth, K. Ford, S. Konyagin, D. Kutzarova, Explicit con-
structions of RIP matrices and related problems, Duke Math. J. 159 (2011)
145–185.
6. E. J. Candès, The restricted isometry property and its implications for compressed
sensing, C. R. Math. Acad. Sci. Paris 346 (2008) 589–592.
7. D. L. Donoho, M. Elad, Optimally sparse representation in general (nonorthogonal)
dictionaries via ℓ1 minimization, Proc. Natl. Acad. Sci. USA 100 (2003) 2197–2202.
8. M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly,
R. G. Baraniuk, Single-pixel imaging via compressive sampling, IEEE Signal
Process. Mag. 25 (2008) 83–91.
9. M. Fickus, J. W. Iverson, J. Jasper, E. J. King, Grassmannian codes from
paired difference sets, Des. Codes Cryptogr. 89 (2021) 2553–2576.
10. M. Fickus, J. Jasper, E. J. King, D. G. Mixon, Equiangular tight frames that
contain regular simplices, Linear Algebra Appl. 555 (2018) 98–138.
11. M. Fickus, J. Jasper, D. G. Mixon, J. D. Peterson, Tremain equiangular tight
frames, J. Combin. Theory Ser. A 153 (2018) 54–66.
12. M. Fickus, J. Jasper, D. G. Mixon, J. D. Peterson, Hadamard equiangular tight
frames, Appl. Comput. Harmon. Anal. 50 (2021) 281–302.
13. M. Fickus, J. Jasper, D. G. Mixon, J. D. Peterson, C. E. Watson, Equiangu-
lar tight frames with centroidal symmetry, Appl. Comput. Harmon. Anal. 44
(2018) 476–496.
14. M. Fickus, J. Jasper, D. G. Mixon, J. D. Peterson, C. E. Watson, Polyphase
equiangular tight frames and abelian generalized quadrangles, Appl. Comput.
Harmon. Anal. 47 (2019) 628–661.
15. M. Fickus, E. C. Lake, Doubly transitive equiangular tight frames that contain
regular simplices, submitted for publication, 2023, arxiv:2302.08879, 48 pages.
16. M. Fickus, D. G. Mixon, Tables of the existence of equiangular tight frames,
arXiv:1504.00253 (2016).
17. M. Fickus, D. G. Mixon, J. Jasper, Equiangular tight frames from hyperovals,
IEEE Trans. Inform. Theory 62 (2016) 5225–5236.
18. M. Fickus, D. G. Mixon, J. C. Tremain, Steiner equiangular tight frames,
Linear Algebra Appl. 436 (2012) 1014–1027.
19. M. Fickus, C. A. Schmitt, Harmonic equiangular tight frames comprised of
regular simplices, Linear Algebra Appl. 586 (2020) 130–169.
20. J. W. Iverson, D. G. Mixon, Doubly transitive lines I: Higman pairs and roux,
J. Combin. Theory Ser. A 185 (2022) 105540.
21. J. W. Iverson, D. G. Mixon, Doubly transitive lines II: Almost simple symme-
tries, arXiv:1905.06859.
22. T. Strohmer, R. W. Heath, Grassmannian frames with applications to coding
and communication, Appl. Comput. Harmon. Anal. 14 (2003) 257–275.
23. J. A. Tropp, Greed is good: Algorithmic results for sparse approximation, IEEE
Trans. Inform. Theory 50 (2004) 2231–2242.
24. L. R. Welch, Lower bounds on the maximum cross correlation of signals, IEEE
Trans. Inform. Theory 20 (1974) 397–399.
25. H. Wilson, US Air Force Science and Technology Strategy: Strengthening
USAF Science and Technology for 2030 and Beyond, Department of the Air
Force Washington, 2019.
26. G. Zauner, Quantum designs: Foundations of a noncommutative design theory,
Ph.D. Thesis, University of Vienna, 1999.
REPORT DOCUMENTATION PAGE
Form Approved
OMB No. 0704–0188
1. REPORT DATE (DD-MM-YYYY): 23-03-2023
2. REPORT TYPE: Master's Thesis
3. DATES COVERED (From - To): Sept 2021 - Mar 2023
4. TITLE AND SUBTITLE: Regular Simplices Within Doubly Transitive Equiangular Tight Frames
6. AUTHOR(S): Evan C. Lake
7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES): Air Force Institute of Technology, Graduate School of Engineering and Management (AFIT/EN), 2950 Hobson Way, WPAFB OH 45433-7765
8. PERFORMING ORGANIZATION REPORT NUMBER: AFIT-ENC-MS-23-M-004
12. DISTRIBUTION / AVAILABILITY STATEMENT: DISTRIBUTION STATEMENT A: APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED.
14. ABSTRACT: An equiangular tight frame (ETF) yields an optimal way to pack a given number of lines into a given space of lesser dimension. Every ETF has minimal coherence, and this makes it potentially useful for compressed sensing. But, its usefulness also depends on its spark: the size of the smallest linearly dependent subsequence of the ETF. When formed into a sensing matrix, a larger spark means a lower chance that information is lost when sensing a sparse vector. Spark is difficult to compute in general, but if an ETF contains a regular simplex, then every such simplex is a linearly dependent subsequence of the ETF of minimal size, and so its spark is known. The "binder" of an ETF indicates all of the regular simplices that the ETF contains. If the binder of an ETF is empty, then it contains no regular simplex, and so its spark is larger than otherwise guaranteed. Proving that either holds, namely proving that a particular ETF's binder is empty or instead proving that it is not, provides useful information about an ETF's suitability for compressed sensing. In this thesis, we focus on doubly transitive equiangular tight frames (DTETFs), namely ETFs whose symmetry group acts in a doubly transitive way. We show that the binder of any DTETF is either empty or forms a balanced incomplete block design (BIBD), a fact that applies to several known infinite families of ETFs. We then fully characterize the binders of every member of one such infinite family, symplectic ETFs, and their Naimark complements. We show that all but a finite number of symplectic ETFs have an empty binder. We also provide a general, closed-form expression for the binder of the symplectic ETF's Naimark complement and the number of blocks that it contains; this disproves a conjecture posed in the recent literature.
15. SUBJECT TERMS: equiangular tight frame, regular simplex, doubly transitive
16. SECURITY CLASSIFICATION OF: a. REPORT: U; b. ABSTRACT: U; c. THIS PAGE: U
17. LIMITATION OF ABSTRACT: UU
18. NUMBER OF PAGES: 70
19a. NAME OF RESPONSIBLE PERSON: Dr. Matthew C. Fickus, AFIT/ENC
19b. TELEPHONE NUMBER (include area code): 937-255-3636x4513; matthew.fi[email protected]
Standard Form 298 (Rev. 8-98), Prescribed by ANSI Std. Z39.18