It seems one of the primary artistic trends of the 20th century (with spillover, maybe of the last two centuries) was the increased acceptance of sketches and raw, undeveloped ideas as legitimate “works of art” ripe for an audience. The beginning of the last millennium would not have indicated this, though: the market and prevailing aesthetic became awfully formalized and refined, catering to richer clientele and to art with corresponding burdens of grandeur. Art was occupied with things entirely other than presenting an idea. Most of it was advertisement and record-keeping for posterity. Here we have some Medicis, history’s greatest accountants.
A large portion of art was also used by the church to convey episodes of morality and ethics in the grand story of Christ et al. There seemed to be stricter rules, or at least expectations, in place about the conception and reception of artwork. Slowly that formality has eroded, and now an artist can display almost anything, and almost nothing– a single line, an empty frame, a person standing still– to an audience, giving the latter more interpretive work to do.
Or so you’d think. Because instead of compensating for new forms by expanding its visual vocabulary, the audience has duly complied with the artists’ lead and become lazy at its end. Nowadays, the simpler a piece looks to the average gaze, the more likely it is to need explanation– and hence, justification. It’s obvious that if an artwork needs excessive analysis and explanation by experts on behalf of the masses, then you have a problem.
Perhaps this all conceals another truth: that what we call art has in fact changed its scope of services to society entirely in the past millennium. Before, it served even the poorest churchgoer, with strong, understandable language of form as its base, and practicality and relevance as its engine for communication. Since then, the art world has dwindled to serve, and with decisively less practicality, the intellectual accumulation of a select few.* In a way, though, this has given painters and sculptors the freedom and the license to conceive of almost anything, trying always to break that next ceiling in the endless skyscraper of taboo. The first steps of that: adoration of the sketch, and the slow and ultimately successful creep towards accepting things in a raw and newborn state. Here is a rough visual timeline, from the 1850s on:
You could read many things from this trend, such as the effects of war on mankind. But I am presenting these simply as visuals, as images experienced here and now.
With rawness now accepted pretty much as a style, it has begun translating into our daily lives. The artist’s sketch does two things for us: 1) it evokes those emotional responses that art is so good at evoking, better than something overly worked-on (and potentially heavy-handed) could, and 2) it inspires us to say “my kid could paint that. Heck, I could paint that!” Steadily, it seems technology has given us just that ability. We are now given the option of entering a constant stream of autobiography, with every next piece of technology promising things delivered in “real-time.” Whereas earlier I had to wait months to get a book published, I can now type a little and click a little and have my book online and available to anyone within one night. With Twitter, I’m not even confined to my laptop anymore. There is increasing legitimacy given to the most mundane musings. Check out Conan’s Twitter feed.
Now before I make my next statement, picture language (that is: English etc.) abstractly, as nothing more than a tool or mechanism for communicating ideas. Like smartphones, it is a brilliant thing. But also like smartphones, it is imperfect. There are some ideas that the English vocabulary somehow cannot grasp, and on top of that, think of how laborious it is: before the person across from you understands your thought, you have to figure out what to communicate, communicate it, then they have to receive that sound (pretend you are just talking with no visuals– on the phone, for example) and process it themselves. On average this transaction takes roughly 2-5 seconds. Quite inefficient, isn’t it? Why that second processing phase? And isn’t there a way to actually make communication real-time? That is, I don’t have to find equivalents in the realm of words and gestures to convey my thought– I can literally put that thought in an interlocutor’s head.
Well, herein I make my prediction, which I extrapolate to occur sometime around the 100th millennium (100,000 AD): imagine a moment in our evolution when we are able to communicate our thoughts to each other purely and instantaneously. In a way, it won’t be communication at all, because communication implies that primitive slog of processing and gesturing and so on. It will be more like us floating in an ether: an ether that receives a thought from one person in it, which is instantly understood by everyone else in the soup. To get a feel for it, try thinking of something, then communicating it to yourself. You see, the need for communication is gone, because you had that thought in your mind already. It is maybe very similar to Nirvana. This, I envision, is going to be a huge step in the evolution of man. But that’s obvious, because you must have already concluded that. (Case in point?)
*Evidence: those de Koonings at MoMA (hanging for the benefit of the public). The reason they are up there is not because the public immediately got their underlying message 50 years ago, but because some highly reputable individuals did, and explained it to us, upon hearing which we collectively uttered “Ooooooooh…”
Furthermore, their intimacy is leagues separated from shock art featuring Jesus and heads of state, and encroaching on that feels completely at odds with their white-box homes. These paintings might belong in the studio.
:this, not this:
Though its schism from the church was a good thing in my opinion, there is one thing from that time that is missed in fine art: its ability to communicate powerful messages, cheaply and quickly, to and for the benefit of many. I guess film & television now hold those reins.
In reverse as well, he is able to begin with an idea, crystallize it into a concept of sufficient scale, and name an artist whose work X in year Y best exemplifies it. In a discussion about public art, he raised a key consideration, one that critics often dismiss and the wider audience will probably not accept for decades more: that is, the role of the artist as a politician. He immediately cited Christo and Jeanne-Claude’s Wrapped Reichstag as the quintessential example thereof. This was not to say that the work had no power other than its political undercurrents: simply the feeling one gets from beholding such a sight brings shivers. It works viscerally and intellectually at the same time. But to conceive such a work took, one could say, Christo’s entire lifetime, for he was born east of the Iron Curtain and had spent much of his life stateless. It was also just the nascence of a new Germany, nay, a whole new Europe, and the wall which divided two worlds all but ran right through the Reichstag, giving the building a very heavy history indeed. It was thus the hundreds of delegates that had to be pandered to; the hour-plus debate; the incessant letter-writing, hallway-pacing, door-to-dooring, phone conferencing, and exhaustive voting (all while straddling the symbolic borderline between two worlds) that really represented the lion’s share of the artists’ efforts. Just the thought of a sole figure milling about a national capital with hundreds of obstructing skeptics, negotiating his way to a project behind closed doors, feels enough to me (had his efforts been unsuccessful the attempt may have remained influential). It also opens up an entirely new tradition– the politics of art, singled out and independent of the final product. A means to justify the means, with no end in sight.
I have an idea boiling in the morning water now as well, which follows this tradition. It begins as a desire to celebrate filmic images (esp. independent of words). I collect screenshots of films, and store them in a personal space. Then I locate the studio / production company that owns the rights to it, and negotiate their release to the public realm. There is a great deal of tension nowadays between the unreconciled desires for free access to everything on one hand and discreet distribution for a profit on the other. This work would clearly push for the former, as I believe the latter is part of a dying model. Finally, upon each film’s liberation, its respective stills that I have stored are distributed as T-shirts representing the film’s ultimate coming-out. It would also be a fun guessing game: which movie is this screenshot from?
OK, OK fine: Blade Runner, Nosferatu, The Draughtsman’s Contract.
This may be half-cooked conjecture, due mostly to its scary simplicity, but it bears notice.
To begin with, the reason our hemisphere is colder during winter months is not because the sun is further away than during summer (in fact, due to the elliptical shape of Earth’s orbit around the sun, we are actually closer in winter) but because of the average angle of the sun’s rays relative to the Earth’s surface. As one may notice, the sun doesn’t rise as high into the winter sky. This is because of the Earth’s axial tilt. We do not spin upright as a freshly released top, but at an angle of close to 23.45 degrees. Here’s the perfect animation we’ve always wanted to see. Due to this tilt, we in the Northern Hemisphere lean away from the sun from October till April. As we have all accidentally experimented at some point in our lives, blowdryers rapidly lose effective heating power the further from perpendicular you tilt them. Maximum heat gain occurs when the direction of the radiation is perpendicular to the incident surface (the same is true, by the way, for magnetic flux). But before I move on to the next paragraph, I remind you that one of the best strategies for passive (that is, without the need of mechanical distribution systems) solar heating is to locate all glazing and heavy materials such as concrete on the south facade of a building. This side gets the most direct sunlight and heats up the most in the daytime.
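The blowdryer intuition is the cosine law of irradiance: the heat a surface receives scales with the cosine of the angle between the incoming rays and the surface’s normal. Here is a minimal sketch of that relationship (the function name and the 1000 W/m² clear-sky figure are my own illustrative choices, not anything official):

```python
import math

def surface_irradiance(direct_normal_irradiance, incidence_angle_deg):
    """Direct-beam irradiance landing on a surface: I = DNI * cos(theta),
    where theta is the angle between the sun's rays and the surface normal.
    Past 90 degrees the sun is behind the surface, so the gain is zero."""
    theta = math.radians(incidence_angle_deg)
    return max(0.0, direct_normal_irradiance * math.cos(theta))

# Tilting away from perpendicular cuts heat gain, just like tilting a blowdryer:
dni = 1000.0  # W/m^2, a roughly typical clear-sky direct-normal value
for angle in (0, 30, 60, 85):
    print(f"{angle:>2} deg off-normal: {surface_irradiance(dni, angle):7.1f} W/m^2")
```

Head-on you get the full 1000 W/m²; at 60 degrees off-normal, only half; near grazing, almost nothing. That steep falloff is the whole mechanism behind the winter sky.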
Until recently, Earth’s surface has remained primarily horizontal. This is an assumption the above evidence rests on. However, the past 5 years have witnessed the beginning of an urban majority (the percentage of people living in cities as opposed to rural areas has crossed the 50% mark), which brings to some the vision of a planet covered completely with cities. What would that mean? Well, it would mean many things. Many scary things. But singling one of those out: it would mean that a majority of the Earth’s surface would now be vertical. Think of the reason our intestines are so bumpy on the inside, why cells have cilia, or why trees grow leaves. It’s all to increase surface area. Now imagine the net surface of the city-blanketed Earth. Most of that surface is going to be vertically oriented. Suddenly, the rays of the low winter sun strike those surfaces closer to head-on (closer to perpendicular) than they did 200 years ago, and those areas actually get hotter in winter. That would then mean, of course, the opposite in the summer, when the sun is high: most solar radiation hits the vertical surfaces at a very shallow angle, resulting in very little heat gain. Add to that a building’s thermal mass and you have one screwed up season cycle.
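The inversion above can be checked with a little noon geometry: for a flat roof, the angle off the surface normal is 90° minus the sun’s altitude, while for a south-facing wall at solar noon it is the altitude itself. A sketch under stated assumptions (the solstice noon altitudes below are the rough values for about 40°N latitude, and everything except the cosine law — atmosphere, reflection, shading — is ignored):

```python
import math

def noon_gain(solar_altitude_deg, dni=1000.0):
    """Direct-beam gain at solar noon on a horizontal roof vs a south-facing
    vertical wall (Northern Hemisphere), using only the cosine law.
    Roof: angle from normal = 90 - altitude, so gain = DNI * sin(altitude).
    South wall: angle from normal = altitude, so gain = DNI * cos(altitude)."""
    alt = math.radians(solar_altitude_deg)
    horizontal = dni * math.sin(alt)  # flat roof
    vertical = dni * math.cos(alt)    # south-facing wall
    return horizontal, vertical

# Approximate noon sun altitudes at ~40 deg N (illustrative, not measured):
for season, altitude in (("winter solstice", 26.5), ("summer solstice", 73.5)):
    roof, wall = noon_gain(altitude)
    print(f"{season}: roof {roof:6.1f} W/m^2, south wall {wall:6.1f} W/m^2")
```

In winter the wall out-collects the roof by roughly two to one; in summer the ratio flips. A mostly-vertical city really would run its heat gain on the opposite schedule from a mostly-horizontal landscape.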
This core concept of the sun’s angle relative to a majority of the surfaces in our microclimate adds spice to the well-documented and much-maligned urban heat island effect.
There’s my morbid hypothesis.
I discovered very intimate evidence that our brains are always aware of the time, and not just within some wide margin, but down to the very minute. And this awareness is only heightened by images. In this peculiar case it was an instance of realizing something that had been present all along, as opposed to an uncovering. The former turns out to be more haunting, and, of course, more interesting to analyze in retrospect.
One Saturday I happened in on my parents re-watching an episode of Columbo, Double Exposure, from 1973. In this one, the murderer manipulates his victim with clever use of subliminal cuts placed in a short film– single frames which the eye detects but does not consciously process, in other words, that one does not “see” (thus confirming our long-held suspicion of advertisements). The whole premise is rather nebulous, but we entertain the horror of it just to allow Columbo one more thing.
I had sat down, engrossed. But by pure coincidence (or was it?) I realized it was after 9pm and I had to go. Exactly 11 minutes after 9. (My father loves being dramatic, and has pretended to freak out every time he’s seen the clock read that for the past decade.) With thoughts of subliminal cuts still fresh in my mind, I thought there had to be some relation between them and the seemingly increased number of times the clock reads 9:11 whenever I check it. It’s strange, after all, that a number that 15 years ago was fairly arbitrary (“nine-eleven,” as opposed to “nine-one-one”) should in any way become less arbitrary, and actually gain a gravitational pull, as it were, after some incident. The explanation for that would be fairly clear: it’s not to do with the number itself, but the number of times it’s been printed and spoken (and thus read, seen, and heard) since then. I wish it were easier to dig up, but surely there must be some statistics on the number of times “9/11” has been mentioned or printed by the media since that day.
Since then, those three numbers and four syllables have become so charged that merely uttering them opens up a whole series of unconscious responses in our brains. And this is where I make a bold assumption: I think our brains try to force those responses on us. Why? Perhaps our minds will do anything to feel more at one with the world around us (that is, to have stimuli to react to). Or maybe it carries a unifying spirit, a combination of anger and camaraderie that binds some of us together and paints others as enemies. A quasi-religious, wholly human trait. To make sense of the world, we need friends and enemies. We create good and bad, right and wrong, to guide us. So our brains use this simple tactic of somehow making us that bit more nervous, reminding us to check the time, in order to identify patterns and paint a picture, which speaks thousands of words to us, organizing and giving purpose to these isolated responses relative to a collective whole.
Surely the media has thought this through already….
It’s not hard for the brain to know what time it is. Forget the internal mechanism, I’m talking about down-to-the-minute accuracy. Think of all the places and times that two numbers with a colon in between occur in your field of vision. Between waking up and sitting down at work, I see my bedside clock, my laptop screen, VCR, clock in the kitchen, New York 1, NPR, cell phone (about once every 5 minutes), microwave, church bell tower, train ticker, useful little news screen in my elevator, even my work telephone. Each displays the time, and these are all within one hour. Of course my brain is going to know when 9:11 is. It also happens to fall right on one of the most stressful times of the day: the beginning. I realized, horrified, on the train platform, that even with the train display and the courteous robot PA voice, people still lean out to check for themselves if the next train REALLY IS far away and still out of sight. I couldn’t bear the stress of checking the time the second I emerge from the train station. What difference will it make? I’m going to walk the same distance at roughly the same speed anyway. I had to return to my Scandinavian roots, where some clocks are made that look like this:
The idea being that you don’t ever need to know EXACTLY what time it is, to the second. In fact, maybe even the minute hand is excessive. One need just to look at the clock and realize, “Oh, it’s time to eat.” or “Oh, it should be getting dark soon.” And here I had always wondered what those silly log clocks were for, with that one short hand that never seemed to move.
“Can you miss a plane by 5 seconds?”
The second hand is something of an invention, something we need just to indicate that time is passing.
This was all too much for me.
So I stopped wearing a watch.
Oftentimes we find ourselves, as creators, at odds between extremes– streaming either towards conceiving of something purely our own, something unique, or towards ‘creating’ something ‘plucked’ from the surrounding world, with as little interference on our part as possible. Both are legitimate. Both present debatable ideas. Both are also impossible.
As I have already stated, there is no such thing as pure creation, out of nothing. (Our definitions of concepts like space clue us in on this: O. F. Bollnow refers to Aristotle and the German language [“Raum”: creating a clearing, esp. from a forest and esp. for settlement] to produce proof that space is not at all infinite and objective, as Newton said, but rather more like an infinitely thin membrane which surrounds life. Lived space. Space experienced. Human space.) Every work arrives at this stage, when it needs to decide (footnote or parentheses?) and address this duality, for it has entered the collective eye of the audience, and they will inquire of it eventually.
Creation has always, since its birth, included reference and copying under its umbrella of legitimate bringers of form. Many times in the history of the western world have people rediscovered an interest in the worlds of antiquity and begun reviving aesthetics therefrom, so much so that it changed the course of the status quo. If artistic progress were a road, would these revivals be considered a U-turn? A backing up? An empty tank of gas? I think they are none of these, for those all carry a negative connotation. As I said, certainly every work that has been created ever since ever has had some precedent. In a morbid, very Lacanian sense, art, as the “self,” can never not have precedent. As a necessary aside, this finding of a point of no precedent (something more interesting than the Big Bang) is another chain of thoughts. As Nate Harrison mentions late in the recording, even the first fire witnessed by man wasn’t without preexisting conditions. We should be thankful for these conditions, because they are what give us limitations, real or imagined, and provide that fantastic freedom-within-constraints which is so vital to the human imagination.*
Nate Harrison starts to skirt the surfaces of a crisis around 7:06. He mentions “fetishization” at 7:26. Hooray.
At a meeting today, the question came up of “do we make this building actually mobile, and moveable, or do we make a solid piece of architecture that implies or symbolizes time, and change, at a less perceivable rate, and those who can readily read architecture will see the implicit gesture?” I said, “Think of hundreds of years ago when architects were probably considered idiots for dreaming of moving buildings. They had to come up with inventive ways of indirectly expressing it without being literal.” (We do after all use adjectives like ‘swooping’ ‘soaring’ ‘floating’ and others for old buildings even though they don’t actually move. This is the power of the imagination, both as a force that creates these illusions and as a lens which allows for the interpretation of those adjectives in the final product.) The first thing we learn about Queen Hatshepsut’s Temple is its apparent emergence from (and camouflage with) the mountains behind it. Of course it isn’t literally doing that but the evocation is strong.
We know for a fact that the Eiffel Tower isn’t about to launch itself into outer space, but it certainly gives that vertical thrust of a feeling, even from a distance.
Then, of course, we get the buildings that wannabe-move without actually moving. When I see Frank Gehry’s buildings, I am starkly disappointed upon realizing that they don’t do what they look like they can, because the illusion is so painstakingly perfected that my imagination gets screwed with, and upon entering and seeing right angles, familiar materials and construction methods, I am angered that I was led on like a baby getting boo-boohed and gah-gaahed.
This kind of architecture pays no heed and plays no tag with the human imagination, as architecture should. The spectacle is glorified and the ideas are so juvenile that I am left to wonder whether this architect really considers the wholeness of the building, the relation of inside to outside, of plan to elevation, of assembly to detail.
Architecture is an art that is primarily immobile. Still is. Granted, mobile buildings have been built and the result can be fantastic. But the technology to accommodate such construction is still prohibitively expensive, and entertaining these ideas of buildings that embody these verbs literally, as opposed to evoking them poetically, should, for all intents and purposes, not leave the napkin. Near the end of our meeting today, we agreed that there’s a give-and-take: the more moving parts the building has, the less iconic and sculptural it becomes. We are not there yet. We cannot build buildings in our likeness. But that’s one reason why we have this fantastic imagination thing.
A tug at the anchor chain…
The reason I brought up buildings is that it is a direct analogy of the sampler and what it did to music in the 80s. Instead of evoking and referencing styles and songs and other artists indirectly through one’s own creations, you could now reference things directly. Use musical quotation marks. I’m afraid there is something of the human imagination that is left behind when we abandon that purity of creation and shift to direct recycling. It of course traces its roots back to quotation in the textual sense, which may have been the first time (and the easiest) that someone appropriated someone else’s work as their own. But I can think of an earlier example– the first teacher.
Teaching, the handing of something down (an artform no matter what the substance), is inherently about copying and repetition. Repetition, in fact, to the utmost extreme. The closer you get to the original, the better, and the more effective the teaching. It is also the struggle to attain the unattainable [an ideology], as with each passing-down there is an iota that changes and is subjected, again, to the onslaught of the imagination, and in the larger sense, to our brain’s tireless thinking of thoughts. In the failure to pass something on verbatim, as an exact duplicate, one discovers the glory of the conscious mind. Teaching is the backbone of progress. Imagine the history of human creation compressed into one human life. Imagine you were taught nothing. How would you communicate? When would you learn to use tools? Express your ideas? Teaching and the faculty of passing down information have become all but an instinct (a logical offspring of the instinct driving communication), and it is a fabulous example of the significance of the new extremes homo sapiens must take a stand between: faithfulness to the story and faithfulness to lucid communication.
Nate Harrison sounds to life the anchor and calls this, at 11:10, “the spirit of a pledge to new forms.” And later, “of potentials for new connections and meanings.” I would love to pose some core questions about why we search for meaning. But that’s a sort of second volume. So later.
The large number of questions posed at the end only reflects the unresolved state of the piracy and free speech debate, which SOPA and PIPA have just thrown to front stage. Harrison ends the recording with a pedagogical technique befitting its topic: a quotation. One that rings true, one of those truths that’s like the darned sun and just won’t go away no matter how we pray the glare away. Kozinski really hit the nail on the head with his “accretion” comment. And any judge whose final comments in a high-profile case (contesting Barbie’s status as a sex object….) are “the parties are advised to chill” gets a huge plus in my book of don’t-take-anything-too-seriously-ism.
*See post entitled [Reference]