## Category: Operator theory

### New paper “Compressions of compact tuples”, and announcement of mistake (and correction) in old paper “Dilations, inclusions of matrix convex sets, and completely positive maps”

Ben Passer and I have recently uploaded our preprint “Compressions of compact tuples” to the arxiv. In this paper we continue to study matrix ranges, and in particular matrix ranges of compact tuples. Recall that the matrix range of a tuple $A = (A_1, \ldots, A_d) \in B(H)^d$ is the free set $\mathcal{W}(A) = \sqcup_{n=1}^\infty \mathcal{W}_n(A)$, where

$\mathcal{W}_n(A) = \{(\phi(A_1), \ldots, \phi(A_d)) : \phi : B(H) \to M_n \textrm{ is UCP}\}$.

A tuple $A$ is said to be minimal if there is no proper reducing subspace $G \subset H$ such that $\mathcal{W}(P_G A\big|_G) = \mathcal{W}(A)$. It is said to be fully compressed if there is no proper subspace $G \subset H$ whatsoever such that $\mathcal{W}(P_G A\big|_G) = \mathcal{W}(A)$.
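To get a concrete feeling for the first level, here is a small numerical sketch (my own illustration, not from the paper). For a tuple of self-adjoint matrices, the points of $\mathcal{W}_1(A)$ arising from vector states are $(\langle A_1 x, x\rangle, \ldots, \langle A_d x, x\rangle)$ for unit vectors $x$, and $\mathcal{W}_1(A)$ is the convex hull of these points. The example tuple (two Pauli matrices) is my own choice.

```python
import numpy as np

def sample_first_level(As, num_samples=2000, rng=None):
    """Sample points of the first level W_1(A) of the matrix range of a
    tuple A = (A_1, ..., A_d) of n x n self-adjoint matrices, by evaluating
    random vector states x -> <A_i x, x>.  For matrix tuples, W_1(A) is the
    convex hull of the sampled set (in the limit of many samples)."""
    rng = np.random.default_rng(rng)
    n = As[0].shape[0]
    points = []
    for _ in range(num_samples):
        x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        x /= np.linalg.norm(x)
        points.append([np.real(np.vdot(x, A @ x)) for A in As])
    return np.array(points)

# Example: a pair of non-commuting self-adjoint 2x2 matrices (Pauli matrices).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
pts = sample_first_level([X, Z])
# For this pair, every sampled point lies in the closed unit disc:
# <x|X|x>^2 + <x|Z|x>^2 <= 1 for every unit vector x.
assert (pts ** 2).sum(axis=1).max() <= 1 + 1e-9
```

For this pair of Pauli matrices the sampled points fill out the unit disc, so $\mathcal{W}_1(X, Z)$ is the disc; the higher levels $\mathcal{W}_n$ carry strictly more information.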

In an earlier paper (“Dilations, inclusions of matrix convex sets, and completely positive maps”) that I wrote with co-authors, we claimed that if two compact tuples $A$ and $B$ are minimal and have the same matrix range, then $A$ is unitarily equivalent to $B$; see Section 6 there (the printed version corresponds to version 2 of the paper on the arxiv). This is false, as subsequent examples by Ben Passer showed (see this paper). A couple of other statements in that section are also incorrect, most notably the claim that every compact tuple can be compressed to a minimal compact tuple with the same matrix range. All the problems with Section 6 of that earlier paper “Dilations,…” can be quickly fixed by adding a “non-singularity” assumption, and we posted a corrected version on the arxiv. (The results of Section 6 there do not affect the rest of the results in the paper, and are somewhat tangential to its main direction.)

In the current paper, Ben and I take a closer look at the non-singularity assumption that was introduced in the corrected version of “Dilations,…”, and we give a complete characterization of non-singular tuples of compacts. This characterization involves the various kinds of extreme points of the matrix range $\mathcal{W}(A)$. We also make a serious investigation into the fully compressed tuples defined above. We find that a matrix tuple is fully compressed if and only if it is non-singular and minimal. Consequently, we get a clean statement of the classification theorem for compacts: if two tuples $A$ and $B$ of compacts are fully compressed, then they are unitarily equivalent if and only if $\mathcal{W}(A) = \mathcal{W}(B)$.

### The complex matrix cube problem summer project – summary of results

In the previous post I announced the project that I was going to supervise in the Summer Projects in Mathematics week at the Technion. In this post I wish to share what we did and what we found in that week.

I had the privilege to work with two very bright students who have recently finished their undergraduate studies: Mattya Ben-Efraim (from Bar-Ilan University) and Yuval Yifrach (from the Technion). The amount of material they learned for this one-week project (the basics of C*-algebras and operator spaces) is remarkable, as is the fact that they actually helped settle the question that I posed to them.

I learned a lot of things in this project. First, I learned that my conjecture was false! I also learned and re-learned some programming abilities, and I learned something about the subtleties and limitations of numerical experimentation (I also learned something about how to supervise an undergraduate research project, but that’s beside the point right now).

### The complex matrix cube problem (in “Summer Projects in Mathematics at the Technion”)

Next week I will participate as a mentor in the Technion’s Summer Projects in Mathematics. The project I offered is called “Numerical explorations of open problems from operator theory”, and it suggests three open problems in operator theory where theoretical progress seems to be stuck, and for which I believe that some computer experiments can help us get a feeling of what is going on. I also hope that thinking seriously about designing experiments can help us to understand some general facets of the theory.

I have been in contact with the students in the last few weeks and we decided to concentrate on “the matrix cube problem”. On Sunday, when the week begins, I will need to present the background to the project to all participants of this week, and I have seven minutes (!!) for this. As everybody knows, the shorter the presentation, the harder the task, and the more preparation and thought it requires. So I will use this blog to practice my little talk.

#### Introduction to the matrix cube problem

This project is in the theory of operator spaces. My purpose is to give you some kind of flavour of what the theory is about, and what we will do this week to contribute to our understanding of this theory.

### Souvenirs from the Red River

Last week I attended the annual Canadian Operator Symposium, better known by its nickname: COSY. This conference takes place every year and travels between Canadian universities, and this time it was held at the University of Manitoba, in Winnipeg. It was organized by Raphaël Clouâtre and Nina Zorboska, who did a great job.

My first discovery: Winnipeg is not that bad! In fact I loved it. Example: here is the view from the window of my room in the university residence:

Not bad, right? A very beautiful sight to wake up to in the morning. (I had gotten the impression from Canadians that Winnipeg is nothing to look forward to. People of the world: don’t listen to Canadians when they say something bad about any place that just doesn’t quite live up to the standard of Montreal, Vancouver, or Banff.) Here is what you see if you look from the other side of the building:

The conference was very broad and diverse in subjects, as it brings together people working in Operator Theory as well as in Operator Algebras (and neither of these fields is very well defined or compact). I have mixed feelings about mixed conferences. But since I haven’t really decided what I myself want to be working on when I grow up, I think they work for me.

I was invited to give a series of three talks that I devoted to noncommutative function theory and noncommutative convexity. My second talk was about my joint work with Guy Salomon and Eli Shamovich on the isomorphism problem for algebras of bounded nc functions on nc varieties, which we, incidentally, posted on the arxiv on the day that the conference began. May I invite you to read the introduction to that paper? (if you like it, also take a look at the previous post).

On this page you can find the schedule, abstracts, and slides of most of the talks, including mine. Some of the best talks were (as happens so often) whiteboard talks, so you won’t find them there. For example, the beautiful series by Aaron Tikuisis was given like that and now it is gone (George Elliott remarked that a survey of the advances Tikuisis described would be very desirable, and I agree).

#### 1. The “resolution” of Elliott’s conjecture

Aaron Tikuisis gave a beautiful series of talks on the rather recent developments in the classification theory of separable-unital-nuclear-simple C*-algebras (henceforth SUNS C*-algebras; the algebra is also assumed infinite dimensional, but let’s make that a standing hypothesis instead of complicating the acronym). I think it is fair to say that his series was the most important set of talks at this conference. In my opinion the work (due to many mathematicians, including himself) that Tikuisis presented can be described as the resolution of the Elliott conjecture; I am sure that some people will disagree with the last statement, including George Elliott himself.

Given a SUNS C*-algebra $A$, one defines its Elliott invariant, $E\ell \ell(A)$, to be the K-theory of $A$, together with some additional data: the image of the unit of $A$ in $K_0(A)$, the space of traces $T(A)$ of $A$, and the pairing between the traces and K-theory. It is clear, once one knows a little K-theory, that if $A$ and $B$ are isomorphic C*-algebras, then their Elliott invariants are isomorphic, in the sense that $K_i(A)$ is isomorphic to $K_i(B)$ for $i=0,1$ (in a unit preserving way), and that $T(A)$ is affinely homeomorphic with $T(B)$ in a way that preserves the pairing with the K-groups. Thus, if two C*-algebras are known to have a different K-group, or a different Elliott invariant, then these C*-algebras are not isomorphic. This observation was used to classify AF algebras and irrational rotation algebras (speaking of which, I cannot help but recommend my friend Claude Schochet’s recent “Notices” article on the irrational rotation algebras).
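Schematically (in my own notation, just packaging the description above), the Elliott invariant bundles the following data:

```latex
E\ell\ell(A) \;=\; \bigl( K_0(A),\; [1_A]_0,\; K_1(A),\; T(A),\; \rho_A \bigr),
\qquad \rho_A : T(A) \times K_0(A) \to \mathbb{R},
```

where $\rho_A$ is the pairing between traces and K-theory, determined on classes of projections by $\rho_A(\tau, [p]) = \tau(p)$.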

In the 1990s, George Elliott made the conjecture that two SUNS C*-algebras are *-isomorphic if and only if $E \ell \ell (A) \cong E \ell \ell (B)$. This conjecture became one of the most important open problems in the theory of operator algebras, and arguably THE most important open problem in C*-algebras. Dozens of people worked on it. There were many classes of C*-algebras that were shown to be classifiable – meaning that they satisfy the Elliott conjecture – but eventually this conjecture was shown to be false in 2002 by Rordam, who built on earlier work by Villadsen.

Now, what does the community do when a conjecture turns out to be false? There are basically four things to do:

1. Work on something else.
2. Start classifying “clouds” of C*-algebras, for example, show that crossed products of a certain type are classifiable within this family (i.e. two algebras within a specified class are isomorphic iff their Elliott invariants are), etc.
3. Make the class of algebras you are trying to classify smaller, i.e., add assumptions.
4. Make the invariant bigger. For example, $K_0(A)$ is not enough, so people used $K_1(A)$. When that turned out to be not enough, people started looking at traces. So if the current invariant is not enough, maybe add more things, the natural candidate (I am told) being the “Cuntz Semigroup”.

The choice of what to do is a matter of personal taste, point of view, and also ability. George Elliott has made the point that choosing 4 requires one to develop new techniques, whereas choosing 3 keeps the focus on the existing techniques, shrinking the class of C*-algebras until the currently known techniques can tackle it.

Elliott’s objections notwithstanding, the impression that I got from the lecture series was that most main forces in the field agreed that the third option above was the way to go. That is, they tried to prove the conjecture for a slightly more restricted class of algebras than SUNS. Over the past 15 years or so (or a bit more), they identified an additional condition – let’s call it Condition Z – that, once added to the standard SUNS assumptions, allows classification. And it’s not that adding the extra assumption made things really easy; it only made the proof possible – it still took first-class work to even identify what assumption needed to be added, and more work to prove that with this additional assumption the conjecture holds. They proved:

Theorem (lots of people): If $A$ and $B$ are infinite dimensional SUNS C*-algebras which satisfy the Universal Coefficient Theorem and the additional Condition Z, then $E\ell \ell (A) \cong E \ell \ell (B)$ if and only if $A \cong B$.

I consider this the best possible resolution of the Elliott conjecture, given that it is false!

A major part of Aaron’s talks was devoted to explaining what this additional Condition Z is. (What the Universal Coefficient Theorem is, though, was not explained, and, if I understand correctly, it is in fact not known whether it doesn’t follow automatically for such algebras.) In fact, there are two conditions that one can take for “Condition Z”: (i) finite nuclear dimension, and (ii) Z-stability. The notion of nuclear dimension corresponds to the usual notion of dimension (of the spectrum) in the commutative case. Z-stability means that the algebra in question absorbs the Jiang-Su algebra under tensor products in a very strong sense. Following a very long tradition in talks about the Jiang-Su algebra, Aaron did not define the Jiang-Su algebra. This is not so bad, since he did explain in detail what finite nuclear dimension means, and said that Z-stability and finite nuclear dimension are equivalent for infinite dimensional C*-algebras (this is the Toms-Winter conjecture).

What was very nice about Aaron’s series of talks was that he gave von Neumann algebraic analogues of the theorems, conditions, and results, and explained how the C*-algebra people got concrete inspiration from the corresponding results and proofs in von Neumann algebras. In particular he showed the parallels to Connes’s theorem that every injective type $II_1$ factor with separable predual is isomorphic to the hyperfinite $II_1$ factor. He made the point that separable predual in the von Neumann algebra world corresponds to separability for C*-algebras, hyperfiniteness corresponds to finite nuclear dimension, and factor corresponds to a simple C*-algebra. He then sketched the lines of the proof of the part of Connes’s theorem that says that injectivity of a $II_1$ factor $M$ implies hyperfiniteness of $M$ (which by Murray and von Neumann’s work implies that $M$ is the hyperfinite $II_1$ factor). After that he repeated a similar sketch for the proof that $Z$-stability implies finite nuclear dimension.

This lecture series was very inspiring and I think that the organizers made an excellent choice inviting Tikuisis to give this lecture series.

#### 2. Residually finite operator algebras and a new trick

Christopher Ramsey gave a short talk on “residually finite dimensional (RFD) operator algebras”. This talk is based on the paper that Chris and Raphael Clouatre recently posted on the arxiv. The authors take the notion of residual finite dimensionality, which is quite well studied and understood in the case of C*-algebras, and develop it in the setting of nonselfadjoint operator algebras. It is worth noting that even a finite dimensional nonselfadjoint operator algebra might fail to be representable as a subalgebra of a matrix algebra. So it is worth specifying that an operator algebra is said to be RFD if it can be completely isometrically embedded in a direct sum of matrix algebras (and so it is not immediate that a finite dimensional algebra is RFD, though they prove that it is).

What I want to share here is a neat and simple observation that Chris and Raphael made, which seemed to have been overlooked by the community.

When we study operator algebras, there are several natural relations by which to classify them: completely isometric isomorphism, unitary equivalence, completely bounded isomorphism, and similarity. Clearly, unitary equivalence implies completely isometric isomorphism, and similarity implies completely bounded isomorphism. The converses do not hold. However, in practice, operator algebras are often shown to be completely boundedly isomorphic by exhibiting a similarity between them (for example, in my recent paper with Guy and Eli). That happens because we are often interested in the “multiplicity free case”.

[Added in June 11, following Yemon’s comment: We say that $A \subset B(H)$ is similar to $B \subseteq B(K)$ if there is an invertible $T \in B(H,K)$ such that $A = T^{-1}BT$. Likewise, two maps $\rho : A \to B(H)$ and $\phi: A \to B(K)$ are said to be similar if there is an invertible $T \in B(H,K)$ such that $\rho(a) = T^{-1} \phi(a) T$ for all $a \in A$. Paulsen’s theorem says that if $\rho : A \to B(H)$ is a completely bounded representation then it is similar to a completely contractive representation $\phi : A \to B(H)$. ]
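To make the distinction concrete, here is a small numpy sketch (my own illustration): a similarity $A = T^{-1} B T$ with a non-unitary $T$ preserves the spectrum but need not preserve the norm, whereas a unitary equivalence preserves both.

```python
import numpy as np

# A similarity A = T^{-1} B T with T invertible but not unitary.
B = np.array([[0.0, 1.0], [0.0, 0.0]])
T = np.diag([1.0, 3.0])           # invertible, not unitary
A = np.linalg.inv(T) @ B @ T      # A = [[0, 3], [0, 0]] is similar to B

# Similar matrices share their eigenvalues...
assert np.allclose(np.sort(np.linalg.eigvals(A)), np.sort(np.linalg.eigvals(B)))

# ...but the operator norm (largest singular value) is not preserved:
# ||B|| = 1 while ||A|| = 3, so A and B are not unitarily equivalent.
assert np.isclose(np.linalg.norm(B, 2), 1.0)
assert np.isclose(np.linalg.norm(A, 2), 3.0)
```

This is exactly why similarity only gives a completely bounded (rather than completely isometric) isomorphism between the algebras involved.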

Raphael and Chris observed that, in fact, completely bounded isomorphism is the same as similarity, modulo completely isometric isomorphisms. To be precise, they proved:

Theorem (the Clouatre-Ramsey trick): If $A$ and $B$ are completely boundedly isomorphic, then $A$ and $B$ are both completely isometrically isomorphic to algebras that are similar.

Proof: Suppose that $A \subseteq B(H)$ and $B \subseteq B(K)$. Let $\phi : A \to B$ be a c.b. isomorphism. By Paulsen’s theorem, $\phi$ is similar to a completely contractive isomorphism $\psi$. So we get that the map

$a \mapsto a \oplus \psi(a) \mapsto a \oplus \phi(a) \in B(H) \oplus B(K)$

decomposes as a product of a complete isometry and a similarity. Likewise, the completely bounded isomorphism $\phi^{-1}$ is similar to a complete contraction $\rho$, and we have that

$\phi^{-1}(b) \oplus b \mapsto \rho(b) \oplus b \mapsto b$

decomposes as the product of a similarity and a complete isometry. Since the composition of all these maps is $\phi$, the proof is complete.

### Minimal and maximal matrix convex sets

The final version of the paper Minimal and maximal matrix convex sets, written by Ben Passer, Baruch Solel and myself, has recently appeared online. The publisher (Elsevier) sent us a link through which the official final version can be downloaded by anyone who clicks on it before May 26, 2018. Here is the link for the use of the public:

Abstract. For every convex body $K \subseteq \mathbb{R}^d$, there is a minimal matrix convex set $\mathcal{W}^{min}(K)$, and a maximal matrix convex set $\mathcal{W}^{max}(K)$, which have $K$ as their ground level. We aim to find the optimal constant $\theta(K)$ such that $\mathcal{W}^{max}(K) \subseteq \theta(K) \cdot \mathcal{W}^{min}(K)$. For example, if $\overline{\mathbb{B}}_{p,d}$ is the unit ball in $\mathbb{R}^d$ with the $p$-norm, then we find that
$\theta(\overline{\mathbb{B}}_{p,d}) = d^{1-|1/p-1/2|}$ .
This constant is sharp, and it is new for all $p \neq 2$. Moreover, for some sets $K$ we find a minimal set $L$ for which $\mathcal{W}^{max}(K) \subseteq \mathcal{W}^{min}(L)$. In particular, we obtain that a convex body $K$ satisfies $\mathcal{W}^{max}(K) = \mathcal{W}^{min}(K)$ only if $K$ is a simplex.
These problems relate to dilation theory, convex geometry, operator systems, and completely positive maps. For example, our results show that every $d$-tuple of self-adjoint contractions can be dilated to a commuting family of self-adjoints, each of norm at most $\sqrt{d}$. We also introduce new explicit constructions of these (and other) dilations.