Soft Skills

Albert Einstein and random thoughts on Machine Learning


I read Einstein’s biography with as much enthusiasm as I read Stephen Hawking’s A Brief History of Time and Domingos’ The Master Algorithm. It’s not only because the book is recommended by, among others, Elon Musk, but probably more because of my childhood dream of becoming a physicist. Although I was too dumb for physics, nothing could prevent me from admiring its beauty.

The book was excellent. Walter Isaacson did a great job depicting Albert Einstein, from his complicated personality to his beliefs, religion, politics, and, of course, his scientific achievements.

As a human being, Einstein was a typical introvert. He was always a loner who enjoyed discussing ideas more than any personal or emotional entanglements. During the most difficult periods of his life, he would rather immerse himself in science than get up and do anything about his real-life struggles. To quote Isaacson, “the stubborn patience that Einstein displayed when dealing with scientific problems was equaled by his impatience when dealing with personal entanglements”, those that put “emotional burdens” on him. Some may criticise him as “cold-hearted”, but perhaps for him it was simply easier to use the analytical brain rather than the emotional brain to deal with mundane daily affairs. This oftentimes resulted in what we might consider brutal acts, like when he gave Mileva Maric a list of harsh terms under which she could stay with him, or when he did almost nothing for his first child and let her die in Serbia. For this, though, he probably deserves more sympathy than condemnation. He was surely a complicated man, and expecting him to also be well-rounded in handling personal affairs is perhaps as unreasonable as it is impossible.

Now, it is perhaps worth emphasizing that Einstein was a one-in-a-million genius who happened to have those personality traits. It does not follow that those who have those traits are geniuses. Correlation does not imply causation 😉

Einstein made his mark in physics back in 1905, when he challenged Newton’s classical physics. He was bold and stubborn in challenging long-standing beliefs in science that were not backed by any experimental results. Unfortunately, in his 40s, quantum physics followed the same rationale, and he just couldn’t take it, although he had contributed a great deal to its development (to this he amusingly commented that a good joke should not be repeated too often). His quest for a unified field theory that could explain both the gravitational field and the electromagnetic field with a concise set of rules failed, and eventually quantum mechanics, with its probabilistic approach, was widely accepted. This saga tells us a lot:

  • The argument Einstein had with the rest of the physicists back in the 1910s over his breakthrough in relativity theory was almost exactly the same as the argument Niels Bohr had in the 1930s over quantum theory, except that in the 1930s, Einstein was on the conservative side. In the 1910s, people believed time was absolute; Einstein showed that was wrong. In the 1930s, Niels Bohr used probabilistic models to describe the subatomic world, while Einstein resisted, because he didn’t believe Nature was “playing dice”.
    Perhaps amusingly, one can draw some analogies in Machine Learning. Einstein’s quest to describe physics with a set of rules sounds like Machine Learners trying to build rule-based systems back in the 1980s. That effort failed, and probabilistic models have held the advantage ever since. The world is perhaps too complicated to be captured by a deterministic system; looked at from a different angle, probability provides a neat mathematical framework to describe the uncertainties that Nature seems to carry. While it seems impossible to describe any complicated system deterministically, it is perhaps okay to describe it probabilistically, although that might not explain how the system was created in the first place.
  • During the 1930s, in a series of lonely, depressing attempts to unify field theories, Einstein sounds a lot like… Geoff Hinton attempting to explain how the human brain works. Actually, those quests are perhaps not too far from each other. The brain is, after all, the 3-pound universe of mankind, and completely understanding the brain is probably as hard as understanding the universe.
  • Being a theorist his whole life, Einstein’s approach to physics was quite remarkable. He never started from experimental results, but often drew insights at the abstract level, then proceeded with intuitive thought experiments, and then went on to rigorous mathematical frameworks. He would often end his papers with a series of experimental studies that could be used to confirm his theory. This top-down approach is very appealing and was widely adopted in physics for quite a long time.
    On the contrary, much research in Machine Learning is bottom-up. Even the master algorithm proposed in Domingos’ book is too bottom-up to be useful. Computer Science, after all, is an applied science in which empirical results are heavily emphasized. In particular, Machine Learning research is largely based on experiments, and the theories that justify those experiments often come long after, if they come at all. To be fair, some work does come from rigorous mathematical inference, like LSTM, SELU and similar ideas, but a lot of breakthroughs in the field are empirical, like convolutional nets, GANs and so on.
    Looking forward, drawing insights from Neuroscience is probably a promising way of designing Machine Learning systems in a top-down fashion. After all, the human brain is the only instance of general intelligence that we know of so far, and the distribution of such generally intelligent devices might be highly skewed and sparse, hence drawing insights from Neuroscience is perhaps our best hope.
  • The way Einstein became an international celebrity was quite bizarre. He officially became a celebrity after he paid visits to America for a series of fund-raising events for the Zionist cause. The world at the time was heavily divided after World War I, and the media was desperately looking for an international symbol to heal the wounds. Einstein, with his self-awareness, twinkling eyes and a good sense of humour, was all too ready to become one. The American media is surely good at this business, and the rest became history.
  • Einstein’s quest to understand the universe bears a lot of similarities with Machine Learners’ quest to build general AI systems. However, while computer scientists are meddling with our tiny, superficial simulations on computers, physicists are looking for ways to understand the universe. Putting our work alongside the physicists’, we should probably feel humbled, and perhaps a bit embarrassed.

It was amazing and refreshing to revisit Einstein’s scientific journey of about 100 years ago, and with a bit of creativity, one can draw many lessons that are still relevant to the research community today. Not only that, the book gives a well-informed picture of Einstein as a human, with all his flaws and weaknesses. Those flaws do not undermine his genius; on the contrary, they make us readers respect him even more. Einstein is therefore, among other things, an exemplar of how much an introvert can contribute to humankind.

For those of us who happen to live in Berlin: any time you sit in Einstein Kaffee and sip a cup of delightful coffee, perhaps you could spare a thought for the man who lived a well-lived life, achieved so much and also missed so much (although the Kaffee itself has nothing to do with Einstein). Berlin, after all, is where Einstein spent 17 years of his life. It is where he produced the general theory of relativity, the most important work of his career, and it is the only city he considered home throughout his bohemian life.

On “The Sympathizer”


I have made great progress and almost finished The Sympathizer. There is only one final knot left to be untied, but I am writing something about it now, because otherwise I will be too lazy once I am done with it.

Usually I am highly reluctant to read trending books. Books, especially fiction, are written to withstand the test of time: if a book is good today, it shouldn’t be too bad 10 years later, otherwise it isn’t worth it. It therefore usually isn’t worth the effort to read a book when you are not sure it will last for 10 years (or 5 years, perhaps).

The Sympathizer is different, though. Reading fiction by one of your countrymen in a foreign language is a pretty weird experience, so weird that I simply couldn’t resist, especially when I was in short supply of good Vietnamese books.

Without spoiling the content, here are a few random comments on the book. It was very enjoyable and turned out to be a good investment.

The story told in the book was inspired by many events that are not unfamiliar to many Vietnamese. Even the way the war is explained, although totally different from the way it is taught in Vietnam, is in fact well-informed and thoughtful. Therefore, if you are a self-respecting Vietnamese who cares to learn more about history than what is taught in schools, the story won’t be too surprising.

The surprise for me, though, was the writing style. For a debut novel, the book is remarkable. Readers are left with the feeling that the author put effort into every single word that appears in the book. He would use bachelor to describe someone in celibacy, or use naïveté instead of naivety, perhaps just to make the narrator sound a bit more French. In other scenes, though, he would use tummy instead of stomach, just to highlight the intimacy of the moment being told. Sentences are often short, but he does not hesitate to write sentences that are a page long, sometimes just to make a point. I haven’t read much fiction in English, except a few books by Charles Dickens, Jack London and Dan Brown (yes, I read Dan Brown too), so I might be biased, but this kind of dedication makes the book a pleasant read.

Many people praised the book for its satire and sense of humour, but those probably come from the brutal honesty of the unnamed narrator, who is quite a unique character.

The narrator is of mixed blood, the son of a French priest and a Vietnamese maid. During the war, he finds himself an assistant to a General of the Army of South Vietnam, although he is actually a sleeper agent for the North. Like any other human being, he has his own weaknesses, in this case his bastard status, which drives him nuts every time other people mention it. Having studied in the US, he absorbed Western values and culture. The whole book is therefore, in some ways, his fight to find his true identity, the true home he really belongs to. These existentialist questions are echoed by the fact that the book opens with a quote from Friedrich Nietzsche.

Given such a complicated background, readers can easily expect him to be quite a man they could have a beer with. He makes smart, provocative comments at every single chance, from the name of the USA to that of the USSR, from sex workers to how dating works, from wines to guns, from Saigon to Hollywood, from the military to, you bet, politics, philosophy and the arts. He can draw, or perhaps more precisely throw, deep philosophical thoughts on seemingly random events and stories. Having seen everything from both sides, perhaps multiple sides, his opinions are well-informed, brutal and amusing at the same time. He takes every chance to reflect on the differences, or correspondences, between the Oriental and Western worlds, as part of his identity crisis.

I still have a couple of chapters left, and therefore haven’t seen everything in the book yet. However, if there is anything to criticize, I would perhaps be concerned about how naive the narrator is when it comes to his loyalty to the North. Just as he cracked open American politics and culture, as well as the war, it would be amazing if he spent a bit more effort exposing the Communist side. That would make the book a fair treatment of the many sides involved in this bloody war.

Moreover, although the author was tactically smart about where to let the story speak for itself and where to make comments, sometimes he comments too much, making parts of the novel a bit heavy and overdone.

Nonetheless, The Sympathizer is a good book. For the many Vietnamese who have not yet been exposed to the minute details of the Vietnam War and its aftermath, this is certainly a good read. For others, this is a refreshing book that will probably keep them thinking for a while after they finish it.

On GPU architecture and why it matters

I had a nice conversation recently about the architecture of CPUs versus that of GPUs. It was so good that I still remembered it the day after, so it is probably worth writing down.

Note that a lot of the following is still several levels of abstraction away from the hardware, and this is in no way a rigorous discussion of modern hardware design. Still, from a software development point of view, it is adequate for everything we need to know.

It started with the difference in how transistors are allocated to different components on the chips of CPUs and GPUs. Roughly speaking, on CPUs a lot of transistors are reserved for the cache (several levels of it), while on GPUs most of the transistors are used for the ALUs, and the cache is not very well-developed. Moreover, a modern CPU has merely a few dozen cores, while a GPU might have thousands.

Why is that? The short answer is that CPUs are MIMD, while GPUs are SIMD (although modern NVIDIA GPUs are closer to MIMD).

The long answer is that CPUs are designed for the von Neumann architecture, where data and instructions are stored in RAM and then fetched to the chip on demand. The bandwidth between RAM and the CPU is limited (the so-called data bus and instruction bus, whose widths are typically ~100 bits on modern computers). In each clock cycle, only ~100 bits of data can be transferred from RAM to the chip. If an instruction or data element needed by the CPU is not on the chip, the CPU might have to wait a few cycles before the data is fetched from RAM. Therefore, a cache is highly needed, and the bigger the cache, the better. Modern CPUs have around 3 levels of cache, unsurprisingly named L1, L2, L3…, with the lower-numbered caches sitting closer to the processor cores. Data and instructions are first fetched into the caches, and the CPU can then read from the cache with much lower latency (cache is expensive, though, but that is another story). In short, in order to keep the CPU’s processors busy, the cache is used to reduce the latency of reading from RAM.

GPUs are different. Designed for graphics processing, GPUs need to compute the same, often simple, arithmetic operations on a large number of data points, because that is what happens in 3D rendering, where there are thousands of vertices that need to be processed in the shader (for those not familiar with computer graphics, that means computing the color values of each vertex in the scene). Each vertex can be computed independently, so it makes sense to have thousands of cores running in parallel. For this to be scalable, all the cores should run the same computation, hence SIMD (otherwise it would be a mess to schedule thousands of cores).
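The shape of that workload can be sketched in plain C (the `shade` formula below is a made-up stand-in for a real per-vertex color computation): one simple function applied independently to every element, with no iteration depending on any other. On a GPU, each iteration would run on its own core, all cores executing the identical instruction stream; on a CPU, the best a compiler can do is auto-vectorize the loop.

```c
#include <stddef.h>

/* Hypothetical per-vertex "shader": the same small arithmetic
   operation applied to a single data point. */
static float shade(float x)
{
    return 0.5f * x + 0.1f;
}

/* SIMD-shaped loop: every iteration is independent, so the work
   could be split across thousands of cores with no coordination. */
void shade_all(const float *in, float *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = shade(in[i]);
}
```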

For CPUs, even with caches, there is still a chance that the chip requires some data or instructions that are not in the cache yet, and it would need to wait a few cycles for them to be read from RAM. This is obviously wasteful. Modern CPUs have pretty smart and complicated prediction logic for prefetching data from RAM to minimize latency. For example, when execution enters a for loop, they can fetch the data around the arrays being accessed and the instructions around the loop. Nonetheless, even with all those tricks, there is still a chance of cache misses!

One simple way to keep the CPU cores busy is context switching. While the CPU is waiting for data from RAM, it can work on something else, which keeps the cores busy while also providing the multi-tasking feature. We are not going to dive into context switching here, but basically it amounts to saving the current stack and registers, restoring those of another thread, resetting the instruction counter, and so on.

Let’s talk about GPUs. A typical fragment of data that GPUs have to work with is on the order of megabytes in size, so it can easily take hundreds of cycles for the data to be fetched to the cores. The question then is how to keep the cores busy.

CPUs deal with this problem by context switching. GPUs don’t do that. The threads on GPUs do not switch, because it would be problematic to switch contexts at the scale of thousands of cores. For the sake of efficiency, there is little locking machinery between GPU cores, so context switching is difficult to implement efficiently. In fact, GPUs don’t try to be too smart in this regard; they simply leave the problem to be solved at a higher level, i.e. the application level.

Talking of applications, GPUs are designed for a very specific set of applications anyway, so can we do something smarter to keep the cores busy? In graphical rendering, the usual workflow is that the cores read a big chunk of data from RAM, do the computation on each element of the data, and write the results back to RAM (sounds like MapReduce? Actually it is not too far from that; we can talk about GPGPU algorithms in another post). For this to be efficient, both the reading and the writing phases should be efficient. Writing is tricky, but reading can be made way faster with, unsurprisingly, a cache. However, the biggest cache system on GPUs is read-only, because a writable cache is messy, especially when you have thousands of cores. Historically it is called the texture cache, because it is where the graphical application would write the texture (typically a bitmap) for the cores to use when shading the vertices. The cores can’t write to this cache because they would not need to, but it is writable from the CPU. When people moved to GPGPU, the texture cache became the usual place to store constants, where they can be read by multiple cores simultaneously with low latency.
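A CPU-side sketch of that read-compute-write pattern (all names and values here are hypothetical, chosen for illustration): a small read-only table plays the role of the texture cache, read by every “core” but written by none, while the input is mapped element by element to the output.

```c
#include <stddef.h>

/* Read-only lookup table standing in for the texture cache: written
   once from the "CPU side", then only ever read by the kernel. */
static const float lut[4] = {1.0f, 2.0f, 4.0f, 8.0f};

/* The map phase: read an element, combine it with a constant,
   write the result. No element depends on any other, which is
   exactly what makes the pattern GPU-friendly. */
void map_kernel(const float *in, float *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = in[i] * lut[i & 3];
}
```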

To summarize, the whole point of the discussion was avoiding idle cores caused by memory latency. Cache is the answer for both CPUs and GPUs, but the cache on GPUs is read-only to the cores due to their massive number. While caches certainly help, CPUs also do context switching to further increase core utilization. GPUs, to the best of my knowledge, don’t do that much. It is left to the developers to design their algorithms so that the cores are fed with enough computation to hide the memory latency (which, by the way, also includes the transfer from RAM to GPU memory via PCI Express – way slower, and not discussed so far).

The proper way to optimize GPGPU algorithms is, therefore, to use the data transfer latency as the guide to optimize.

Nowadays, frameworks like TensorFlow or Torch hide all of these details, but at the price of some inefficiency. The TensorFlow community is aware of this and trying their best, but much is still left to be done.

What Goes Around Comes Around: Databases, Big Data and the like

Over the years, I have had the privilege of working with some pretty damn good people. One of those guys has a PhD in Database Research and used to be a professor, but was apparently so passionate and so good at teaching that he eventually quit academia to join the industry.

He did a PhD in XML databases, and even though XML databases are completely useless, that doesn’t mean a PhD in XML databases couldn’t teach you anything (in fact, a good PhD could teach you quite a few useful things). One interesting thing I learned from him was the evolution of database technology, which originates from an essay by Michael Stonebraker called What Goes Around Comes Around.

Michael Stonebraker is a big name who has been around in Database Research for a good 35 years, so you can easily expect him to be highly opinionated on a variety of things. The first few lines of the essay read like this:

In addition, we present the lessons learned from the exploration of the proposals in each era. Most current researchers were not around for many of the previous eras, and have limited (if any) understanding of what was previously learned. There is an old adage that he who does not understand history is condemned to repeat it. By presenting “ancient history”, we hope to allow future researchers to avoid replaying history.

Unfortunately, the main proposal in the current XML era bears a striking resemblance to the CODASYL proposal from the early 1970’s, which failed because of its complexity. Hence, the current era is replaying history, and “what goes around comes around”. Hopefully the next era will be smarter.

His works, among others, include PostgreSQL. Hence, after reading the essay, it became obvious to me why there are so many highly opinionated design decisions implemented in Postgres.

The essay is a very good read. You get to see how database technologies evolved from naive data models to unnecessarily complicated ones, then, thanks to a good mathematician named Edgar F. Codd, became way more beautiful and theoretically well-grounded. After a few alternatives, a new wave of XML databases came back bearing a lot of complications. (Along the way, you also get to see how Michael Stonebraker managed to sell several startups, but that isn’t the main story – or is it?)

There are many interesting lessons learned. The two most interesting for me are:

  • XML databases didn’t take off because they are exceedingly complicated, and there is no way to efficiently store and index them using our best data structures, like B-trees and the like.
    I studied XML databases and was told that they failed because they lacked a theoretical foundation like the Relational model. Looking back now, I think that wasn’t the main issue. The problem with XML is that it allows too much flexibility in the language, so implementing a good query optimizer for it is extremely difficult.
    A bit more ironically, this is how Michael Stonebraker puts it:

    We close with one final cynical note. A couple of years ago OLE-DB was being pushed hard by Microsoft; now it is “X stuff”. OLE-DB was pushed by Microsoft, in large part, because it did not control ODBC and perceived a competitive advantage in OLE-DB. Now Microsoft perceives a big threat from Java and its various cross platform extensions, such as J2EE. Hence, it is pushing hard on the XML and Soap front to try to blunt the success of Java

    It sounds very reasonable to me. At some point around 2000-2010, I remember hearing things like XML is everywhere in Windows. It must have taken someone like Microsoft pushing it hard to make it such a phenomenon. When Microsoft started the .NET effort to compete directly with Java, the XML database movement faded away.
    One thing Michael Stonebraker got wrong, though: in the essay, he said XML (and SOAP) was going to be the data exchange format of the future, but it turns out XML is still overly complicated for this purpose, and people ended up with JSON and REST instead.

  • The whole competitive advantage of PostgreSQL was its UDTs and UDFs, a generalization of sorts of stored procedures. Stored procedures soon went out of fashion because people realized it is difficult to maintain code in multiple places, both in the application and in stored procedures in the DBMS. However, the idea of bringing code close to the data (instead of bringing data to the code) is a good one, and it had a big influence on the Big Data movement.

Speaking of Big Data, Stonebraker must have something to say about it. For anyone who is in Big Data, you should really see his talk on the topic if you haven’t:

The talk presents a highly organized view of various aspects of Big Data and how people solved them (and of course mentions a few startups founded by our very own Michael Stonebraker).

He mentions Spark at some point. If you look at Spark nowadays, it’s little more than an in-memory distributed SQL engine (for traditional business intelligence queries), along with a pretty good Machine Learning library (for advanced analytics). From a database point of view, Spark looks like a toy: you can’t share tables, tables don’t have indices, etc… but the key idea is still there: you bring the computation to the data.

Of course I don’t think Spark wants to become a database eventually, so I am not sure whether Spark plans to fix those issues at all, but adding a catalog (i.e. data schemas) and supporting a reasonably full-fledged SQL engine were pretty wise decisions.

There are several other good insights into the Big Data ecosystem as well: why MapReduce sucks, what other approaches exist to the Big Volume problem (besides Spark), how to solve the Big Velocity problem with streaming, SQL, NoSQL and NewSQL, why the hardest problem in Big Data is Variety, etc… I should have written a better summary of those, but you can simply check the talk out.

On “visions”


Sorry folks, this isn’t about him. This is about the other kind of vision that people keep talking about.

Over the years, I have realised that the word “vision” is a pretty good feature for detecting rubbish conversations. Every time I hear somebody talking about their “vision”, I almost immediately classify the conversation into the “bullshit” category, and I am almost always right.

Don’t get me wrong. I have no problem with them, it’s good for them to have a vision, really.

For example, in one of his remarkable interviews, Linus Torvalds compared Nikola Tesla and Thomas Edison, and made the point that, although many people love Tesla, and some even name their company after him (you-know-who), he feels he is more “Edison” than “Tesla”. While other people enjoy looking at the stars and the moon, he enjoys looking at the path under his feet, fixing the holes in that path that he might otherwise fall into. Nonetheless, Linus has no problem with people having great visions.

Yeah, good guy Linus: when you make 10 million bucks a year, you wouldn’t bother wasting your neural cycles on that nonsense.

Leaders in the industry keep talking about their visions, which is totally understandable. That’s how leadership works. They need to show that they have some vision, so that people follow them, work for them (often underpaid), and help build their dreams.

That vision, however, isn’t necessarily only for others. When you are too rich, I guess it would take a grand vision (or two) to get yourself up every morning, to push yourself to work, to make you feel your life has some meaning in it.

In any case, it makes sense.

People who are building startups are probably the ones who talk about their visions most often. They need the money, they need to win over the investors, and for that they need a vision, or to make one up if they haven’t got any. So in this case it’s fine for them too, since they have a clear motivation to brag about their vision, which can be either extraordinary or total crap (or both).

The most annoying visions are the ones from Wannabe Entrepreneurs. Those could be mediocre engineers who are sick of their jobs but don’t dare to quit, fresh PhD students whose egos have been overfed by academia for too long, etc… I have met many of them. Their story often goes like this: oh, by the way, I’ve got this awesome vision about improving X in Y by using Z. I think if we move fast, we will be the first. So I need to find somebody to realize this vision for me.

That’s the problem. Since when are entrepreneurs the ones who have a vision but can’t execute it themselves, and so hire a bunch of people to do it for them?

I don’t think entrepreneurship works that way.

If you couldn’t do it yourself, why the heck would people believe that you could lead them to success?

Oh, I hear you saying that you actually could, but you wouldn’t, because blah blah blah (maybe because you are too good for dirty jobs)? Dear, like anything else in Engineering, it’s all about dirty work. It’s dirty work that drives progress. You can talk a lot about what you know (or pretend to know), but if you can’t deliver anything with your bare hands, the joke is on you.

Of course, in certain markets, at certain times, people with only a vision and no action can still make money, and a lot of money. For instance, when the market is at the peak of some crazy hype, this kind of entrepreneurship might work. But I haven’t seen it, except in some very ridiculous cases.

All in all, it’s good to have a vision. In fact, I believe any serious person has some form of vision in their particular area of interest. But the fact is, nobody gives a damn about your vision. The only things they care about are your actions, and your results, if there are any.

So please, stop bragging about your vision. Start acting. You might actually get something done in the future.

To conclude, it is relevant to mention people who don’t bother bragging about their vision, even when they have the authority to do so. Geoff Hinton once said in his lectures that he isn’t going to predict the future more than 5 years ahead, because doing so is like driving on a foggy road. You can’t possibly see too far ahead, so you don’t know whether behind the fog lies the road or a brick wall.

Another example is Alan Turing, who said: “We can only see a short distance ahead, but we can see plenty there that needs to be done.”

And I believe he means it, really.


On the “Trumpocalypse”

I have realized that I have passed the point where I feel more comfortable writing in English than in Vietnamese, so I will keep doing this until it becomes too painful. Writing, for me, is a leisure activity, so I’m gonna do whatever feels most relaxing.

So, on the event that is flooding all over social media. The following points have been lurking in my mind for quite a while, and I think the most effective way to forget about them is just to write them all down (although I really wonder if it is worth the effort):

  • It’s no secret that Mr. Trump lost the popular vote, but won the electoral vote, which is decisive for the presidency. This is not the first time this has happened in the US, and once again it shows that the Electoral College system in the US is, at the very least, flawed.
  • I heard somebody say the election was a great demonstration of democracy? C’mon, please. If anything, it is, at least, delusional. I am not sure why the US keeps a system that was invented several hundred years ago while other options are available. At the very least, they could consider the election procedures implemented in most of Europe (which, by the way, are based on a fucking mathematical model).
  • Strange as it might sound, I heard somebody else say that one of the root causes of the massive support for Donald Trump is the expensive educational system in the US. Many Americans can’t afford their (higher) education, so over the years the number of people with a low-quality education accumulates, and they are easily fooled by aggressive, angry but empty statements. Although this might be true for some regions of the States (and I have seen similar phenomena in some regions of Europe too), recent exit polls in the US don’t really show it.
  • We can totally understand why Americans are sick of candidates like Hillary Clinton. Diplomatic, well-thought-out, well-positioned statements often come across as weak, and people are sick of that kind of leader. But in a challenging world with so many different interests, being aggressive might hurt more often than it helps. Some of the topics in Mr. Trump’s first 100-day plan already hint at instability on a world scale, and who knows what kind of events might come once he manages to deliver everything he promised.
  • One plausible consequence of this election is that the US Federal Government might turn out to be weak, and the State governments might gain more actual control over the policies of their states. I am not sure how realistic this future is, because the US Federal Government has seemed pretty strong so far, but for many strong states, following a super-conservative policy might be a bad idea, due to the lack of skilled workers, the rise of economic inequality, environmental issues, etc…
  • Last but not least, if the US cancels the TPP, it will be very bad news for Vietnam. The Vietnamese economy is already suffering from corruption and low income (due to the low oil price), and it is probably surviving on loans (e.g. from China last year). The TPP used to be seen as one of the last lifelines for Vietnam. Without the TPP, I can imagine a Venezuela-style future for the Vietnamese economy (if you are in Vietnam, maybe it is a good idea NOT to put your money in the domestic banks).

The future might turn out to be a bit unstable for many of us, especially with the elections coming in Europe this year and next, but in general, I believe the best we can do is to stay cool, and focus on learning and delivering whatever the heck we promised. That is apparently the best investment for the future.

Nguyễn Huy Thiệp’s short stories

Back in school, analyzing short stories was the kind of writing I found hardest. Looking back now, the genre is still just as hard, but since I am more than a decade past my essay-writing years, I will try to review several stories at once. Hopefully my prose, after more than ten years, isn’t too hard to stomach.


I found this book a few years ago, while wandering around the bookshops in Saigon (buying Vietnamese books is the thing I do most often whenever I go home). I have read little Vietnamese literature written after 1975, apart from a few books by Bảo Ninh, Chu Lai and others, so my general impression is that the writing of this period is rather dull: if it is not about the war, it is propaganda. When the country was writhing in pain, and literature was framed by dogma on top of that, it could hardly take flight.

This is the first book of Nguyễn Huy Thiệp’s that I have read. I only finished it recently, and overall I was very impressed. Next time I am back in Vietnam I will probably look for Tướng về hưu (The General Retires).

Short stories in general are hard to write, and not easy to read either. In this book, some pieces are in fact collections of several very short stories, so I read rather slowly.

Overall, on the artistic side, I am a bit puzzled about where to place the author. Some stories are written with a very sure hand, with masterful technique and solid structure; others are somewhat loose, or slip in philosophical musings that feel a little forced.

Even so, a few broad traits of Nguyễn Huy Thiệp’s writing can still be recognized from this book.