First, it is very healthy to encourage researchers to open their eyes and look around, instead of always concentrating on their own work – either racing for another publication or “selling” it. At the very least, since I was asked to speak about somebody else’s work, it is guaranteed that I will learn something new at the workshop!

The second reason why I think that this is a very welcome idea is maybe a bit deeper. Every mathematician works to solve their favorite problems or develop their theories, but every once in a while it is worthwhile to stop and think: what do we make out of all this? What are the results/theories/points of view that we would like to carry forward with us? The tree can’t grow in all directions with no checks – we need to prune it. We need to bridge the gap between the never-ending flow of papers and results, on one side, and the textbooks of the future, on the other.

With these ambitious thoughts in mind, I chose to speak about Davidson and Kennedy’s paper “Noncommutative Choquet theory”, in order to force myself to digest and internalize what looked to me like an important paper from the moment it came out; with this, I hoped to stop for a moment and rearrange my mental grip on noncommutative function theory and noncommutative convexity.

The theory developed by Davidson and Kennedy and its precursors were inspired to a very large extent by classical Choquet theory. It therefore seems that to understand it properly, as well as to understand the reasoning behind some of the definitions and approaches, one needs to be familiar with this theory. So one possible natural way to start to describe Davidson and Kennedy’s theory is by recalling the classical theory that it generalizes.

But I didn’t want to explain it in this way, because that is the way that Davidson and Kennedy’s exposition goes (both in the papers and in some talks that I saw). I wanted to start from the noncommutative point of view at the outset. I **did** use the classical (i.e., commutative) case for a tiny bit of motivation, but in a somewhat different way, which rests on stuff **everybody** knows. So, I did a little expository experiment, and if you think it blew up, then everybody can simply go and read the original paper.

Here are my “slides”:

The conference webpage will have video recordings of all talks at some point.

**The bottom line of Kolodny’s talk and Wieman’s paper is that the university lecture as we know it doesn’t work and is a waste of time. They have some ideas how to fix it, an approach that – as a first approximation – we can call “technology driven flipped classroom”.** **To me, the most disturbing parts of their approach are (1) that they believe that their opinions are “science based”, and therefore (2) they believe in promoting institutional change. These two aspects worry me more than any technical discussion of whether we should flip the classroom sideways or upside-down.**

Kolodny remarked during his talk (I am paraphrasing): “I am not here to bury the concept of a lecture. Lectures are good and important. In fact, I am giving a lecture at this very moment. But you should remember that lectures are no good at passing information. In a lecture you motivate, you stimulate, you do propaganda. I’m here to do propaganda”.

Certainly I was stimulated by the talk, I was motivated to look up and then read Wieman’s paper, but most of all I was angry, I felt that someone was trying to brainwash me to believe in a certain ideology, rather than sharing some insights on teaching. Part of what made me feel this way was the “scientific approach” rhetoric. Another thing that bothered me was the jump from facts (some problems that almost everybody will agree on) to conclusions (a particular pedagogical methodology is the only way that works), disregarding tradition as not much more than momentum. Indeed, it felt like propaganda.

In this post I want to record my thoughts on some arguments raised by flipped-classroom enthusiasts, and in particular on two aspects: the “scientific approach” itself, and with it the claim that lectures don’t work and that we have to revolutionize the whole structure of courses to make them work.

I wish to recommend reading Wieman’s paper – not only so that you can appreciate my criticism, but because it is a well-reasoned piece of work by someone who has not only thought deeply about the subject, but has also researched it. I have a lot of respect for his efforts.

I am focusing my criticism on his paper because it is written, available, and interesting. But I am really arguing with talks, lectures, discussions, blog posts, etc., that I have seen through the years and that have got me thinking for a long time. Now is just an opportunity to pour all of this out.

So, why not try a scientific approach to science education? Here’s why not:

Wieman advocates that teaching should be approached “with the same rigorous standards of scholarship as scientific research”.

I propose that not every human endeavor should be treated “with the same rigorous standards…” etc. Consider, for example, making scrambled eggs. One can experiment with egg making, weigh the eggs, make precise time measurements of different attempts, hand out questionnaires to members of the family to see what they like best. After obtaining the results, we should do our best to stick to the optimal results, frying the eggs the precise amount of time determined by the egg-weight-to-fry-time formula…

Ok, you get the idea. I am trying to show the absurdity of the idea that **everything** should be done in a scientific approach. Some activities, like teaching, are not an experiment with a well-defined numeric outcome, but rather a getting together of human beings, with goals, one of them being the gathering itself, the meeting, the live exchange of ideas and reactions. I claim that teaching – like making scrambled eggs or kissing or riding a bicycle – is an activity that humans can do very well without the pretence of approaching it scientifically.

By calling something a “scientific approach” you win the argument from the outset through intimidation. Nobody wants to be anti-science. But civilization has developed several academic disciplines – not all of them sciences – and it is sometimes inappropriate to use a certain paradigm to attack a problem in a different field. Not all forms of intelligent reasoning are scientific, and, moreover, not all forms of wisdom and knowledge are academic. Personal experience and tradition have value.

Here’s an idea, along the lines of Wieman’s suggestion: let’s try a logical approach to mathematics education. Why don’t we start by making some definitions say: *learning, student, teacher,* then choose the axioms and deduction rules, and then simply *prove that learning has taken place*. Wouldn’t that be the logical thing to do?

Of course, that’s a ridiculous idea, because we are imposing our tools on a matter to which they do not apply. If a physicist tells me that they have an idea on how to carry out teaching and that they have proved – using the rigorous standards of scientific research – that this is the best approach, then I do not feel compelled to listen further. To me, that sounds like a logician telling me that they proved that this or that approach to teaching is the only correct approach. I would be much more compelled to listen to a professor tell me what they have learned from their experience.

Let me recall what I am attacking: a “scientifically founded” claim that traditional teaching methods don’t work and need to be replaced. I believe that traditional teaching works to a large extent and that one should be **extremely careful** when introducing institutional change, creating new norms “for every teacher in every classroom” (as Wieman advocates).

Why do I believe that lectures work? Because when I was a student, that’s precisely how I learned. **Of course** I didn’t learn just by showing up in class. In class I encountered the material and listened, but I also took notes, which I later worked through on my own.

Objection: “So what if you, the individual, succeeded? You became a math professor! Not all students end up being math professors, what about them?”

But I was not the only one who succeeded studying this way. Many of my fellow students did the same. Lots of people I know succeeded, and most of them are not professors today. One cannot deny that something is working.

Wieman reports on questionnaires handed out to students after lectures, which show that students hardly recall anything after a lecture. This is very alarming news! But if I know from experience that students do learn under the current method, then I have to do what every scientist must do when the results of an experiment contradict what they know: question not only what I know, but also the experiment. Perhaps these questionnaires are measuring the wrong thing? Perhaps I need to be careful with the interpretation of their results? Surely, I need to be careful about drawing conclusions.

I know that the traditional course can work because it has worked for me and my friends, it has worked for the generations before us, and to some extent it is working for the generation that we are now teaching.

The fact that the “lecture” form of teaching has essentially not changed through the centuries is often brought up as a reason why it should be abolished: “We have been teaching the same way for centuries!” I leave it to you to think whether there are things that humans have been doing in essentially the same way for centuries, and whether that in itself is a reason for change.

Wieman thinks that “Practices and conclusions [should be] based on objective data rather than — as is frequently the case in education — anecdote or tradition”.

Traditions develop for reasons and serve a purpose. Traditions also change with time, and adapt to reality. There is nothing holy about the traditional way of teaching – it is not a religion that we are practicing – but we should not ignore the wisdom encoded in a certain tradition, which is the reason why it has survived. **Tradition** is not synonymous with **momentum**.

So much for tradition. What about basing practices on **anecdote**? I think that Wieman is mixing things up: teachers learn how to teach, decide on practices, and reach conclusions based on their **experience**. Referring to a person’s experience as a bunch of anecdotes ignores the innate ability of humans to make connections, find patterns, and learn how to carry out complex tasks. Imagine that somebody tells you “I learned to drive through a series of anecdotes”. Ridiculous!

Every department has a cadre of experienced and talented teachers and develops its own organizational memory. The combined experience of the lecturers, the organizational memory of the institute and the traditions of the discipline combine to shape the curriculum, the courses, and the lectures. Certainly, knowledge and research in science education can play an important role, but it cannot replace experience and tradition.

Another important aspect of teaching within a certain tradition is that lecturers who teach in the same way that they studied as students have a better understanding of how the course looks to the students; they have their own first-hand experience of what works and what doesn’t.

In Kolodny’s talk, he said that homework assignments should be extremely easy so that students won’t be tempted to cheat. He means it literally: make homework so easy that looking up the answers is more difficult than coming up with them. He is basically saying that we should give up on student responsibility. There are two messages here: (a) the students are not responsible players; (b) we have no way of influencing that. I reject this.

In the paper, Wieman mentions “Just-in-time teaching”, a technique developed by Novak, Gavrin, Patterson and Christian. “*The technique uses the Web to ask students questions concerning the material to be covered, questions that they must answer just before class. The students thus start the class already engaged, and the instructor, who has looked at the students’ answers, already knows a reasonable amount about their difficulties with the topic to be covered.*”

When I was a student, I reviewed the notes I took in one lecture before going to listen to the following lecture, so that I was always prepared. I tell my students that they should do this or some equivalent (read the section in the book, watch a video), and sometimes I tell them explicitly: “please go over this proof before next time”. Like all my colleagues, I am aware that many students don’t actually do this. The consequence of not being prepared for class is not understanding it. It is an educational struggle to try to get the students to take responsibility for their learning, and it requires us to find ways of motivating them. This educational struggle has value.

The idea of sending questions to the students before a lecture is an example of what I call *micro-motivations*. You give up on having motivated students to begin with; you give up on being able to motivate. Instead, you use technology to give your students frequent, small, low-expectations tasks so that they don’t tune out. And you continuously monitor the students. I feel that this is setting the bar very low. For the students, the consequence of not being prepared used to be that they didn’t understand the lecture; with the just-in-time approach, the consequence of students not being prepared is that the instructor has to redesign the course! I wonder how this will play out.

Some of the insights Wieman raises are pure gold. For example, he writes that “even the most thoughtful, dedicated teachers spend enormously more time worrying about their lectures than they do about their homework assignments, which I think is a mistake”. That’s a wonderful point, and I’d like to take it even one step further: as you plan a week ahead in your course, prepare your lecture and the assignments in a complementary way (for example, plan to omit some parts of a proof knowing that it will appear in the homework, which will motivate the students to look back at the proof and understand it better; or give an exercise in week n that will resonate with something you will do in class in week n+1; etc.).

However, a lot of Wieman’s golden insights are presented as the logical corollaries of scientific discoveries in cognitive science, whereas to me they seem to be basic conclusions that anyone who has taught or thought about teaching should have reached. Wieman cites studies in cognitive science that say that in order to develop different ways of thinking about a subject, students require “extended, focused, mental effort” and also that “extended, highly focused mental processing is required to build those little proteins that make up the long-term memory”. I admit that it never occurred to me that proteins were involved, but didn’t we all know that designing the homework assignments is important and that “extended, focused, mental effort” is essential for learning?

Using science to justify a plain conclusion that we could all reach is one thing, mostly harmless. But basing your assumptions on science, then jumping to conclusions and calling the whole process science, is something different, and it scares me. We should be careful not to jump to conclusions.

For example, science confirms that students require “extended, focused, mental effort” to learn. But is the (flipped) classroom the correct place to carry out this extended effort? My experience is that I always needed some time **alone** to think things through. Having an instructor hovering around the class is distracting. Having to work with peers is nice, but maybe not for everybody. Maybe not for the very weak, tagging along, shutting up so as not to slow everybody down. Maybe not for the very strong. Maybe not for the very shy.

This is an example of why I call what Wieman is doing propaganda. He claims to be approaching the subject as a scientist, giving himself the high ground in the argument. He cites results from cognitive science that support some basic insights that every teacher has reached through experience and common sense. But then he has conclusions about **what to do**, and these conclusions are no longer decisively settled by science.

Are there no studies that support the institutional changes that Wieman advocates? Yes, there are studies, but these studies are no longer in science, but in science education – a respectable discipline, but not a science.

Teaching is a task that we want to do well. With the goal of improving science education, Wieman proposes what he calls a scientific approach. He defines some measurable quantities that can be used to compare teaching strategies, proposes a certain teaching technique, and then shows that the numerical value of the measurable quantity obtained using his approach is higher than what you get using the traditional methods. And thus, he gets to write: “*We now have good data showing that traditional approaches to teaching science are not successful for a large proportion of our students, and we have a few research-based approaches that achieve much better learning*.”

Now Wieman has won the Nobel Prize in physics, and who the hell am I? But I dare say that what he is doing in the context of education is not science, it is engineering. Compared to Wieman, I know nothing about science, but I do have some experience in engineering. And what Wieman and other education researchers are doing is precisely what people in image processing, applied optics and neural networks (fields I am familiar with) do:

- You have a complex task to carry out.
- You define metrics that measure how good a solution to the task is.
- You devise a new method to solve the task.
- You carry out a test or an experiment, measure the results, and find out that your method is better than the competing ones.
- You publish the result.

I am not claiming that this is bad; I am saying that it is not science. That’s ok. As I remarked above, science is not the only intelligent way to reason. So what these people are doing is trying to improve teaching by engineering courses so that certain measurable quantities are maximized. It’s interesting and legitimate and might be useful, but that’s not how I want to look at education. Please stay away from *my* classroom, and please don’t give me any forms to fill, rubrics to check, or any other kind of bureaucracy.

There is a joke about engineers a friend once told me.

*In the cut-throat field of image denoising, a group of engineers set out to find the best method, out of the millions published, to accomplish the task of denoising images taken by a smartphone. For this meta-study, they developed a program based on machine learning algorithms that would read all papers on the subject, compare their results, and decide by state-of-the-art statistical tools which is the best method. The program ran for forty days and forty nights, and then spat out the output, which was the name of the best method of all: “our”.*

(Did you get it? All papers say something along the lines “**our method** improves on the previously known…” etc.)

But seriously, I believe that any devoted and talented teacher will probably be able to improve the teaching, and will surely be able to improve a given measurable quantity, if they invent a new teaching approach they believe in and deliver it enthusiastically (especially if the students know it is a great new method, and even more so if the teacher doesn’t shy away from motivating them with propaganda such as “science based” and so forth).

Over the last two decades or so, the field of neural networks has exploded, and this has led to several revolutions in one field of engineering after another. Since we are talking about engineering education, and since this is 2021, we have to discuss neural nets.

A neural network is a paradigm of computation that is (in a loose sense) inspired by how the brain works. Let me briefly describe the idea, oversimplifying for the sake of people with no engineering background.

Consider the task of classifying images on your Google Photos account into categories: people, pets, scenery, food. The old way of approaching such a computer vision task was to think hard about characteristics of people versus characteristics of pets: say, people have eyes that look like this, pets have eyes that look like that, people stand on two legs, pets usually stand on four, etc. This is the “science” part, the part where we use our knowledge of the world, and some sophisticated buzzwords like “shape theory” can be employed. Then you would try to engineer various feature detectors or shape finders, and you would scan the image looking for pairs of eyes, etc. And then you would embed these feature detectors in a high-level program that looks for many, many features and tries to make intelligent decisions, like also taking into consideration the color of the photo and what we know about the color of people or pets.

The current neural network approach is totally different. A human can tell the difference between a picture of a person and a picture of a dog. We don’t count the legs and say: “Four legs – it’s a dog!” No, we know it’s a dog, because we have seen lots of dogs and lots of people. So let’s **train** the computer to tell the difference. We take a database of a million pictures and get somebody to tag them appropriately. We show these million pictures to the neural network, and then, by an iterative algorithm, it learns how to classify the images. The “science” part is gone. We never had to tell the neural network that a dog has four legs; the network learned this by itself from looking at examples.
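For readers with a bit of programming background, here is a toy sketch of my own (not from Wieman’s paper or Kolodny’s talk) of this “learning from tagged examples” idea. Instead of images, each “picture” is just a pair of numbers, and the “network” is a single neuron (logistic regression) trained by gradient descent; all the numbers – the data, the labels, the learning rate – are invented for illustration.

```python
# A toy sketch: a single "neuron" learns to separate two classes purely
# from labeled examples -- no hand-written rules about legs or eyes.
import math
import random

random.seed(0)

# Synthetic "pictures": 2-D feature vectors around (-1, -1) for "person"
# (label 0) and around (1, 1) for "dog" (label 1).
data = [((random.gauss(-1, 0.5), random.gauss(-1, 0.5)), 0) for _ in range(100)]
data += [((random.gauss(1, 0.5), random.gauss(1, 0.5)), 1) for _ in range(100)]

# Train by gradient descent on the cross-entropy loss.
w1, w2, b = 0.0, 0.0, 0.0
lr = 0.5
for _ in range(500):
    g1 = g2 = gb = 0.0
    for (x1, x2), label in data:
        p = 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b)))  # predicted P(dog)
        err = p - label
        g1 += err * x1
        g2 += err * x2
        gb += err
    n = len(data)
    w1 -= lr * g1 / n
    w2 -= lr * g2 / n
    b -= lr * gb / n

# Fraction of the examples the trained neuron now classifies correctly.
accuracy = sum(((w1 * x1 + w2 * x2 + b > 0) == bool(label))
               for (x1, x2), label in data) / len(data)
```

The point of the sketch is that nowhere did we encode a rule like “a dog has four legs”; the classifier adjusted its weights purely by looking at the labeled examples.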

Coming back to education, the best contemporary approach to the science-education engineering problem would be to try a neural network. Trying to hand engineer how we teach using our scientific knowledge (e.g., using a fact like 10% retention of information after 50 minutes) is the obsolete approach. What we need is a rich enough, big enough, fast enough neural network that will learn from experience.

Surely you knew where I was going with this. But I am not joking. Neural networks beat the old model-based kinds of algorithms when the task is very complicated and we give up on having a model that we can understand (and in lots of simpler tasks, such as solving linear equations or computing the DFT, classical algorithms still prevail). Engineers and scientists have found the modesty to admit that there are problems which they cannot solve with a model-based approach or a direct algorithm, and they let neural networks take over. I believe that education is precisely the kind of complicated field where thinking hard about how to do things bottom-up just won’t work. But we don’t need artificial neural networks, since we have real ones – the teachers.

In fact, one of the developments in the field of neural networks is to use not just one neural net but a cluster of neural networks that makes decisions by consensus or by majority vote. There you have it: the engineering approach to education has led us back to the good old faculty meeting (what a bore!).
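To see why majority voting helps, here is a toy calculation of my own (the numbers and the setup are invented, not from any talk or paper): if each member of a three-member committee independently errs 20% of the time, the committee errs only when at least two members err, which works out to 3 · 0.2² · 0.8 + 0.2³ = 0.104, i.e. about 10.4%.

```python
# A toy sketch: three independent, imperfect "classifiers" (or faculty
# members) deciding by majority vote. Each is wrong 20% of the time;
# the committee should be wrong only about 10.4% of the time.
import random

random.seed(1)

ERROR_RATE = 0.2  # each individual member's error probability (invented)

def member_vote(true_label):
    """One committee member: returns the true label, except 20% of the time."""
    return true_label if random.random() > ERROR_RATE else 1 - true_label

def majority_vote(true_label):
    """Decision of a three-member committee by majority."""
    votes = sum(member_vote(true_label) for _ in range(3))
    return 1 if votes >= 2 else 0

# Estimate the committee's error rate empirically.
trials = 10000
errors = sum(majority_vote(1) != 1 for _ in range(trials))
error_rate = errors / trials  # should come out close to 0.104
```

This assumes the members err independently, which is of course the optimistic case; a faculty meeting where everybody shares the same blind spot gains nothing from voting.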

One day I saw the book “How to Teach Mathematics” by Steven Krantz in a pile of books outside the library, and immediately took it home. It’s interesting to read (I’ll lend it to you if you happen to be in my area :-). Krantz believes in the “Sage on the Stage” way of teaching, as opposed to the “Guide on the Side” approach, which he attacks. In short, he believes in keeping on doing things the good old way. A very nice thing about this book is the appendix, in which there appear **a dozen** essays on teaching by other writers, some of them with quite a different point of view from Krantz. Now, this book was written in 1999, and the funny thing is that the grumpy old men who wanted to keep on doing things the good old way have not suffered too much from the passage of time. The reformists, however, have aged much worse, in my opinion. While the sages on the stage kept giving the same lectures over the past two decades with whatever success, people who believed that the programming language ISETL was a harbinger of future mathematics education have probably had to reinvent their teaching philosophy at least once during this time. This wasted effort is something to keep in mind when considering reform, especially radical reform.

Not to mention “clickers”.

There are other examples of teaching reforms gone sour. Israelis can recall the notorious BDIDIM, or various changes to the way reading is taught – revolutions that were eventually reversed.

I have much respect for people who do research on education and I think we should hear what they have to say. I myself am not interested enough to regularly study papers or books on the subject, nor to carry out my own research on the matter, but I insist that I can take part in the discussion without being a scholar. Education is not condensed-matter physics; we all have a stake and a say.

Here is an ironic example. Wieman writes about the need for means of measuring learning outcomes, and adds: “*We do have student evaluations of instructors, but these are primarily popularity contests and not measures of learning*.” But in fact, I have been to a lecture on student evaluations by Prof. Nira Hativa in which she convinced me that student evaluations are **definitely not primarily popularity contests** and are actually quite effective (see the abstract to this book; note that the book is from 2013, but I saw the lecture – based on extensive research – before 2009, the year Wieman wrote his paper). On the other hand, the abstract of the book itself refers to numerous other studies that reach the opposite conclusion: “*Every year, many new publications claim to “prove” that SRIs are unreliable and invalid*”.

So how can we use the research literature, if it contains contradictions? Does it mean that we have to throw it out the window?

No. Remember: I claim that science-education is not a science, in any case we can all agree that it is not mathematics. So *both points of view can be true*, in some sense. To illustrate this, I will continue with my example.

I used to disregard teaching evaluations. I told myself that the good students probably give me good grades, and that the idiots are the ones who give me poor grades (unfortunately there seemed to be more idiots than good students – this correlated reasonably well with the final grade distribution). But one of the things Prof. Hativa showed in her lecture was that, actually, students’ grades and the grades they gave their instructors were not correlated. This is a fact – she had access to the data and that’s what it showed, year after year. I realized that it was simply too easy for me to dismiss student evaluations, and I made a decision at very low cost, with no risk, and with a possibly high benefit: I decided to carefully read student evaluations and see if I could use them to improve as a teacher.

On the other hand, administrators should be aware of the numerous studies showing that there are possible problems with student evaluations, and be careful in using them for making decisions about people.

I am not against making changes to the way in which I deliver courses – I’ve tried a lot of new things over the years (just ask and I’ll be happy to tell you). Actually, I never taught the same course twice without making significant changes. I am also not a technophobe – I registered for a few MOOCs, and one of the reasons was to see how it is from the students’ side, and to consider using similar methods. I am not a grumpy old man who wants to keep doing things “the good old way”; I am a grumpy middle-aged man who wants to do things *my own way*.

Everybody has their own way of learning and their own way of teaching. We should remember that we are humans and we are not to be engineered. We should continually listen to our students. We should – and this is important – really care about our students and about our job as teachers. We should also listen to our colleagues. I am happy to hear your ideas on how to teach, but please, don’t make them what they’re not. It is not a science and you haven’t *proved* anything, even if you have probably done a very nice job. Institutional change should be made with care, based on experience, careful thought, and expertise in education. Pseudoscience will mislead us. I have seen attempts at defining “learning outcomes” and all kinds of measurables, and unfortunately this often leads to nothing more than some technocratic gibberish and a bit of additional paperwork.

If the flipped classroom becomes the standard form of teaching, I wouldn’t be surprised if, twenty years from now, the traditional lecture is re-discovered: a bold faculty member will try to give lectures, the students will be told that they must take notes and then solve exercises *by themselves at home* (wow!), and then this method will be presented in the teaching seminar. A couple of students from the class might also tell about their experience (“at first it was hard to get used to, but I think we learned much more this way!”). Some people will be skeptical, and some will embrace the new idea. I’ll still be around, and even though I will probably have gotten used to the flipped classroom years before, I surely wouldn’t mind them trying their crazy ideas, so long as they don’t force me to teach like that.

Here is the content of the info page that I will be distributing:

**Topics in Functional Analysis 106433**

**Winter 2021**

**Introduction to Operator Algebras**

Lecturer: Orr Shalit (oshalit@technion.ac.il, Amado 709)

Credit points: 3

**Summary**: The theory of operator algebras is one of the richest and broadest research areas within contemporary functional analysis, having deep connections to every subject in mathematics. In fact, this topic is so huge that the research splits into several distinct branches: C*-algebras, von Neumann algebras, non-selfadjoint operator algebras, and others. Our goal in this course is to master the basics of the subject matter, get a taste of the material in every branch, and develop a high-level understanding of operator algebras.

The plan is to study the following topics:

- Banach algebras and the basics of C*-algebras.
- Commutative C*-algebras. Function algebras.
- The basic theory of von Neumann algebras.
- Representations of C*-algebras. GNS representation. Algebras of compact operators.
- Introduction to operator spaces, non-selfadjoint operator algebras, and completely bounded maps.
- Time permitting, we will learn some additional advanced topics (to be decided according to the students’ and the instructor’s interests). Possible topics:
  - C*-algebras and von Neumann algebras associated with discrete groups.
  - Nuclearity, tensor products and approximation techniques.
  - Arveson’s theory of the C*-envelope and hyperrigidity.
  - Hilbert C*-modules.

**Prerequisites**: I will assume that the students have taken (or are taking concurrently) the graduate course in functional analysis. Exceptional students, who are interested in this course but did not take Functional Analysis, should talk to the instructor before enrolling.

**The grade**: The grade will be based on written assignments that will be presented and defended by the students.

**References:**

The following are good general references, though we shall not follow any of them very closely (at most a chapter here or there).

- Orr Shalit’s lecture notes.
- K.R. Davidson, “C*-Algebras by Example”.
- R.V. Kadison and J. Ringrose, “Fundamentals of the Theory of Operator Algebras”.
- C. Anantharaman and S. Popa, “An Introduction to II_1 Factors”.
- N.P. Brown and N. Ozawa, “C*-Algebras and Finite Dimensional Approximations”.
- V. Paulsen, “Completely Bounded Maps and Operator Algebras”.

In this talk, I decided to put an emphasis on telling the story of how we found ourselves working on this problem, rather than giving a logical presentation of the results in the paper that I was trying to advertise (this paper). I am not sure how much of this story one can get from the slides, but here they are.

**Time:** 15:30-16:30

**Date:** May 6th, 2021

**Title:** Distance between reproducing kernel Hilbert spaces and geometry of finite sets in the unit ball

**Abstract:**

We study the relationships between a reproducing kernel Hilbert space, its multiplier algebra, and the geometry of the point set on which they live. We introduce a variant of the Banach-Mazur distance suited for measuring the distance between reproducing kernel Hilbert spaces, that quantifies how far two spaces are from being isometrically isomorphic as reproducing kernel Hilbert spaces. We introduce an analogous distance for multiplier algebras, that quantifies how far two algebras are from being completely isometrically isomorphic. We show that, in the setting of finite dimensional quotients of the Drury-Arveson space, two spaces are “close” to one another if and only if their multiplier algebras are “close”, and that this happens if and only if one of the underlying point sets is close to an image of the other under a biholomorphic automorphism of the unit ball. These equivalences are obtained as corollaries of quantitative estimates that we prove.

This is joint work with Danny Ofek and Orr Shalit.

If you are interested in the zoom link, let me know.


I came back to this old post, and noticed that it is almost ten years since Bill Arveson passed away. It’s hard to believe.

William B. Arveson was born in 1934 and died last year on November 15, 2011. He was my mathematical hero; his written mathematics has influenced me more than anybody else’s. Of course, he has been much more than just *my* hero, his work has had deep and wide influence on the entire operator theory and operator algebras communities. Let me quickly give an example that everyone can appreciate: Arveson proved what may be considered as the “Hahn-Banach Theorem” appropriate for operator algebras. He did much more than that, and I will expand below on some of his early contributions, but I want to say something before that on what he was to me.

When I was a PhD student I worked in noncommutative dynamics. Briefly, this is the study of actions of (one-parameter) semigroups of *-endomorphisms on von Neumann algebras (in short E-semigroups). The definitive book on this subject is…


**Time:** 15:30-16:30 Thursday, February 18, 2021

**Title:** Quantum groups: constructions and lattices

**Abstract:** We will present a few constructions of locally compact quantum groups, and relate them to structural notions such as lattices and unimodularity, as well as to property (T).

Zoom link:

Next Thursday, January 7th, 2021, Michael Hartz will speak in our Operator Algebras and Operator Theory seminar.

**Title: How can you compute the multiplier norm?**

**Time: 15:30-16:30**

**Zoom link:** Email me.

**Abstract:**

Multipliers of reproducing kernel Hilbert spaces arise in various contexts in operator theory and complex analysis. A basic example is the Hardy space $H^2$, whose multiplier algebra is $H^\infty$, the algebra of bounded holomorphic functions. In particular, the norm of a multiplier on $H^2$ is the pointwise supremum norm.

For general reproducing kernel Hilbert spaces, the multiplier norm can be computed by testing positivity of matrices analogous to the classical Pick matrix. For $H^2$, matrices of size $1$ suffice. I will talk about when it suffices to consider matrices of bounded size $n$. Moreover, I will explain how this problem is related to subhomogeneity of operator algebras.
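For context, here is the standard positivity test being alluded to, stated in notation that is mine and not the abstract’s ($k$ for the kernel, $z_i$ for the points): a function $f$ on a set $X$ is a multiplier of norm at most $1$ of a reproducing kernel Hilbert space with kernel $k$ if and only if

```latex
% Multiplier norm criterion for an RKHS with kernel k on a set X:
% \|M_f\| \le 1 if and only if, for every finite z_1, \dots, z_n \in X,
\Bigl[ \bigl( 1 - f(z_i)\,\overline{f(z_j)} \bigr)\, k(z_i, z_j) \Bigr]_{i,j=1}^{n} \succeq 0 .
```

For the Hardy space $H^2$, where $k$ is the Szegő kernel $k(z,w) = (1 - z\bar{w})^{-1}$, this is the classical Pick matrix.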

This is joint work with Alexandru Aleman, John McCarthy and Stefan Richter.

Next Thursday the Operator Algebras and Operator Theory Seminar will convene for a talk by Adam Dor-On.

**Title: Quantum symmetries in the representation theory of operator algebras**

**Speaker: Adam Dor-On** (University of Illinois, Urbana-Champaign)

**Time:** AFTERNOON, Thursday, Dec. 10, 2020 (NOTE: THE SEMINAR WAS POSTPONED BY ONE WEEK FROM THE ORIGINAL DATE).

(Zoom room will open about ten minutes earlier, and the talk will begin at 15:30)

**Zoom link:** email me.

**Abstract:**

We introduce a non-self-adjoint generalization of Quigg’s notion of coaction of a discrete group G on a C*-algebra. We call these coactions “quantum symmetries” because, from the point of view of quantum groups, coactions on C*-algebras are just actions of a quantum dual group of G on the C*-algebra. We introduce and develop a compatible C*-envelope, which is the smallest C*-coaction system containing a given operator algebra coaction system, and we call it the cosystem C*-envelope.

It turns out that the new point of view of quantum symmetries of non-self-adjoint algebras is useful for resolving problems in both C*-algebra theory and non-self-adjoint operator algebra theory. We use quantum symmetries to resolve some problems left open in work of Clouatre and Ramsey on finite dimensional approximations of representations, as well as a problem of Carlsen, Larsen, Sims and Vitadello on the existence of a co-universal C*-algebra for product systems over arbitrary right LCM semigroups embedded in groups. This latter problem was resolved for abelian lattice ordered semigroups by the speaker and Katsoulis, and we extend this to arbitrary right LCM semigroups. Consequently, we are also able to extend the Hao-Ng isomorphism theorems of the speaker with Katsoulis from abelian lattice ordered semigroups to arbitrary right LCM semigroups.

*This talk is based on two papers: one with Clouatre, and another with Kakariadis, Katsoulis, Laca and X. Li.

As in other subjects of mathematics, when working on Hilbert function spaces, one sometimes asks very basic questions, such as: *when are two Hilbert function spaces the same?* *what is the “true” set on which the functions in a RKHS are defined?* (see Section 2 in this paper) or *what information is encoded in a space or its multiplier algebra?* (see the “road map” here). The underlying questions behind our new paper are *when are two Hilbert function spaces “almost” the same *and *what happens if you change a Hilbert function space “just a little bit”?* If these sound like interesting questions, then I suggest you take a look at the paper’s introduction.

Here is the abstract:

In this paper we study the relationships between a reproducing kernel Hilbert space, its multiplier algebra, and the geometry of the point set on which they live. We introduce a variant of the Banach-Mazur distance suited for measuring the distance between reproducing kernel Hilbert spaces, that quantifies how far two spaces are from being isometrically isomorphic as reproducing kernel Hilbert spaces. We introduce an analogous distance for multiplier algebras, that quantifies how far two algebras are from being completely isometrically isomorphic. We show that, in the setting of finite dimensional quotients of the Drury-Arveson space, two spaces are “close” to one another if and only if their multiplier algebras are “close”, and that this happens if and only if the underlying point-sets are “almost congruent”, meaning that one of the sets is very close to an image of the other under a biholomorphic automorphism of the unit ball. These equivalences are obtained as corollaries of quantitative estimates that we prove.