David, I posted a comment to your blog about declarative programming. It updates my current thinking on our past discussions. What I didn't realize was that my prior email to you was essentially driving toward the Actor model. The comment apparently ended up in WordPress's spam queue, so I have duplicated it below.
1. HTML is commutative when employing CSS 'position:fixed' or 'position:absolute' with distinct 'z-index' properties.
2. Declaring an environment-independent order (e.g. not time-dependent) is a valid declaration that doesn't violate the concurrent and reactive properties, and thus should be allowed to selectively disable the commutative property. CSS 'position:relative' is declarative ordering with respect to the parent, analogous to a function call or an attribute argument list of the parent declaration.
3. HTML is not idempotent, not even when employing duplicate 'id' attributes, because duplicate 'id' attributes are illegal and have undefined behavior.
4. Ditto #2, but applied to #3: idempotence could be selectively disabled, as long as there is no environment dependency and thus the concurrent and reactive properties are not violated. For example, declaring an event handler that creates a new object. Otherwise, how could declarative programming create new objects?
5. How can declarative programming most generally interoperate with environmental state, i.e. what is the general model of declarative interoperation of modules?
Assume a referentially transparent (RT) expression that calls a pure function, where that function takes an immutable copy of the environment as input and returns a modified copy of the environment. Assume a non-RT expression which contains that RT expression and stores the value of the returned environment. This non-RT expression is equivalent (except for atomicity of multiple changes) to calling an impure function that takes an immutable copy of the environment as input and, within the function, writes state changes to the environment. It is critical that this impure function does not read changes to the environment; i.e. it is pure other than its written side effects on the environment, so the concurrent and reactive (and even parallelism) properties are not violated.
I have written about how to implement this with type safety using Scala dependent method types.
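The environment-passing scheme above can be sketched in a few lines. This is a minimal illustration under assumed names (`pure_step`, `impure_step`, `ENVIRONMENT` are all hypothetical), not an implementation from the text or from Scala:

```python
def pure_step(env):
    # Pure: reads only its argument and returns a new environment copy.
    new_env = dict(env)                  # a shallow copy stands in for immutability
    new_env["counter"] = env.get("counter", 0) + 1
    return new_env

ENVIRONMENT = {"counter": 0}             # the shared environment (illustrative)

def impure_step(env_snapshot):
    # "Pure other than written side-effects": it never re-reads the live
    # environment, only the snapshot it was handed, so concurrent calls
    # cannot observe each other's intermediate reads.
    result = pure_step(env_snapshot)
    ENVIRONMENT.update(result)           # the only side effect: the write
    return result
```

Calling `impure_step(dict(ENVIRONMENT))` then behaves as the text describes: the snapshot argument is never mutated, and the write-back is the sole effect.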
This impure function is analogous to an Erlang-style Actor. It sends messages (writes) to the environment, and responds to messages (reads) only via the arguments with which it is called. The key revelation of Actors is that they interface with the environment declaratively, in that they can exchange state only via incoming messages, and the order in which messages are received is asynchronous (i.e. undefined; thus sends and receives are commutative).
An Actor that reads mutable state (other than from its input message arguments), even if that state is persistent (between input messages), is not pure and thus not reactive and thus not declarative. The read expressions are not RT, and thus are not modular for composition. This Serializer type of Actor is necessary for coordination or for deterministic persistent state (there is dynamic, stochastic, and potentially resilient state in the distributed Actors). Note that message sends are RT within the scope of processing a single input message, and thus are modular for composing an Actor's message processing function.
6. Persistent immutable data structures with structural sharing are the most efficient for exchanging mutated state. Rich Hickey discusses this (see 23:50min). The atomicity of state changes in the Actor model is handled either by the fact that a message receive is atomic, or, for multiple coordinated Actors, by an intermediary Serializer. The Lost Update Problem should be handled in the messaging protocol, e.g. "add to list if not already in list" or "write this value, if current value is this old value, else..." (include an edit id in immutable structures for faster distributed equality comparison).
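The two protocol messages quoted above can be sketched as pure-ish store operations. This is an illustrative sketch (the function names and the plain-dict store are my assumptions, not a real Actor library):

```python
def compare_and_set(store, key, old, new):
    """'Write this value, if current value is this old value, else...'

    Returns (success, current_value). A losing writer gets the winner's
    value back and can retry against it, so no lock is ever taken."""
    if store.get(key) == old:
        store[key] = new
        return True, new
    return False, store.get(key)

def add_if_absent(store, key, item):
    # "add to list if not already in list" from the messaging protocol above
    current = store.get(key, ())
    if item in current:
        return False, current
    store[key] = current + (item,)       # tuples model the immutable structure
    return True, store[key]
```

A stale writer in `compare_and_set` loses harmlessly instead of clobbering, which is exactly how the Lost Update Problem is avoided without locking.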
7. Comments on Hickey's criticisms (see bullets in the "Message Passing and Actors" section).
* Deadlock should be impossible because there is no way for an Actor's message processing function to lock a resource on another Actor and wait before replying to the input. A deadlock requires two processes to lock the same two resources in opposite order and wait on the locks. It is true that timeouts (stored in a Serializer for persistence) will be necessary on expected callback replies; this makes programs more resilient and modular, even if the persistent state (shared between Actors by persisting between input messages) is all on the local machine (e.g. crashed modules). Non-persistent local data can be read and written directly with no messaging. A non-Serializer Actor's message handling function is pure (except for sending messages to other Actors), so it can be serialized and sent to a distributed instance; thus the bifurcation should not occur and should be handled in the Actor library.
* Afaics, the Actor library can be just as efficient between local Actors. I showed that function calls are equivalent to message passing in #5 above.
* Afaics, it is good modular, resilient design. The Serializer Actor enables coordination.
* Afaics, the Serializer Actor can be a local persistent state store to give the same non-distributed semantics.
8. Comments on David Barbour's criticisms (see bullets in the "Some ActorsModel Skepticism" section).
* Why can't a timeout be placed on an expected callback reply to a message? This can be stored in the Serializer. As for the code for the non-Serializer Actor, this is a caching issue, not a garbage collection issue, since it can have only one value. Distributed values are copies and can be discarded immediately upon return of the Actor's message processing function.
* What is wrong with a persistent serializer?
* In my vision, there is a serializer that is coordinating which Actors are responding to which events. So the CPS could update the behavior for an event at the Serializer.
9. I am adding the concept of a Future, which encapsulates a callback as an immutable value, so that multiple callbacks can be multiplexed as Future objects passed as arguments in pure functional composition within the Actor's message processing function. When the value is needed, it will call back the function supplied to Future.get and then return from the Actor's message processing function.
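The multiplexing idea above can be sketched as follows. This is a toy, assumed implementation (a real version could make the cell immutable); `Future`, `combine`, and their signatures are my own illustrative names, not Copute's:

```python
class Future:
    def __init__(self):
        self._value = None
        self._done = False
        self._callbacks = ()

    def get(self, callback):
        # Register interest; fires immediately if the value is already known.
        if self._done:
            callback(self._value)
        else:
            self._callbacks = self._callbacks + (callback,)

    def complete(self, value):
        # Called once by the messaging layer when the reply arrives.
        self._done, self._value = True, value
        for cb in self._callbacks:
            cb(value)

def combine(fa, fb, f):
    # Multiplex two pending results into one by ordinary composition,
    # inside the Actor's message processing function.
    out = Future()
    fa.get(lambda a: fb.get(lambda b: out.complete(f(a, b))))
    return out
```

Note that `combine` works regardless of which underlying message arrives first, which is the commutativity the Actor interface demands.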
Copute doesn't need to be commercially viable; it just needs to attract enough use to advance the language into a professional tool. I am creating Copute because I am sick of all the other languages. There are a few critically important things I am fixing with Copute. Once I have the tool I want, then I will begin to create commercial products again.
I had a breakthrough at 3am in a half-dream state. I realized "never lock, always fork (copy)". Distributed version control showed us that "resource contention" should be resolved socially, not by forcing centralization:
"I know, it’s strange... since 1972 everyone was thinking that we were manipulating versions, but, it turned out, surprisingly, that thinking about the changes themselves as first class solved a very important problem: the problem of merging branched code."
"The locking model was founded on two assumptions. First, that modification conflicts would be frequent and severe enough that they could only be... ". ...automated. I think Eric is wrong. The problem is that resource conflicts are always a social issue; it is just that we haven't designed our data structures correctly. For example, if I want to post a tweet, it should not go into a centralized DB, but rather into my DB, and then my DB should send a notification to everyone on the watchlist in my DB. No locks needed.
I also realized how to handle the typical drag-n-drop event handling case without any impure functions. Click here for an impure way of doing it, that illustrates the problem (note Adobe software is cited as a use case).
The key is to put the list of events being watched in a data structure that is the input and output of the handler function. This data structure can be set into a GUI widget at construction time.
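A minimal sketch of that idea, with the watch set threaded through a pure handler (the event names and the `handler`/`run` shapes are illustrative assumptions, not Copute code):

```python
def handler(watching, event):
    # Pure: maps (watch set, event) -> (new watch set, commands).
    # No hidden widget state is read or written.
    if event == "mousedown" and "mousemove" not in watching:
        return watching | {"mousemove", "mouseup"}, ["begin-drag"]
    if event == "mousemove" and "mousemove" in watching:
        return watching, ["move-ghost"]
    if event == "mouseup" and "mouseup" in watching:
        return watching - {"mousemove", "mouseup"}, ["drop"]
    return watching, []

def run(events, watching=frozenset({"mousedown"})):
    # The GUI toolkit's dispatch loop, threading the watch set through.
    commands = []
    for e in events:
        watching, out = handler(watching, e)
        commands += out
    return commands
```

The toolkit owns the only mutable cell (the current watch set); the drag logic itself stays pure and therefore composable.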
So the big win is that I don't need deadlock prevention, and I can make the Copute language entirely pure. Thus I can simplify many things, such as never allowing an if without a matching else, which solves the dangling-else ambiguity of impure programming languages. And I can eliminate many concepts, such as setters and getters. This was the big design win I had expected from the beginning; but when I was studying how to handle events purely, I somehow got sidetracked from my original thinking that it could be done with pure functions.
Note there are advances in making immutable data structures as efficient as mutable ones.
Regexes are finite automata, not open to new states. They are static, not dynamic.
It is an interesting example of inverting a problem, when we convert an unsolvable local contention (e.g. local locks can never be guaranteed not to deadlock in a distributed case) into a solvable global one, as I explained previously. It seems conceptually analogous to how we use limits (i.e. approaching a point but never reaching it) in definite integrals: when we find the area under a curve, we can't do it by sampling the area under each local point on the curve, because there are infinitely many such points, i.e. an unsolvable problem. This is because a point on a curve is infinitesimally small. And the inverse of the integral (the derivative) gives us the slope at a point on the curve without having to take the slope between the two points closest to either side of that point (because again there are no such two points, as there are infinitely many points between any two points on a curve).
Thus I had the thought that the essence of limits is to make the shape of the function's curve (slope) first-class, i.e. we don't even care whether the function has a value at the limit:
Specialized processors are more efficient in terms of silicon expended for the task they perform, so they are desirable when that task is frequent enough that the silicon efficiency gained is positive overall; i.e. the duty cycle of the specialized task must be factored in (as well as other considerations, such as I/O load on the general CPU).
That silicon (and energy) efficiency equation can be applied vice versa: making the general CPU simpler (e.g. RISC) can lead to greater silicon efficiency, because complex instruction sets have a lower duty cycle.
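The break-even reasoning above can be made concrete with a toy inequality (all numbers and names here are invented for illustration): a specialized unit pays off when the duty cycle times the per-task savings exceeds the fixed cost of carrying the unit at all.

```python
def specialization_pays(duty_cycle, cost_general, cost_special, overhead):
    """Toy break-even test for a specialized processor.

    duty_cycle:   fraction of time spent on the specialized task
    cost_*:       energy/silicon cost per task on each kind of unit
    overhead:     fixed cost (area, idle leakage) of the extra unit
    """
    saved = duty_cycle * (cost_general - cost_special)
    return saved > overhead
```

With an 8x per-task saving, a 30% duty cycle justifies the unit but a 5% duty cycle does not, which is the point about factoring in duty cycle.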
I expect that for some years or decades the number of cores will continue to increase in line with Moore's law. Some ideas I have seen: using a material other than silicon, 3D circuits (Intel's tri-gate technology), and making cores simpler.
Ultimately the amount of processing we can fit in a very small gadget will reach a limit:
However, there is a solution to this. With near-field radio (e.g. Bluetooth), we can put more cores somewhere on our body or clothing and offload processing from the gadget we hold in our hand. Hopefully we can charge them without wires too, so we don't have to think about it. Hopefully these spare processors become so cheap that they come standard in clothing, shoes, etc.
Identifying parallelism in software is not the same as concurrency:
And we can coax parallelism into a series of conditional operations on sets, by using a State monad and Traversable:
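A hand-rolled sketch of that State-plus-Traversable idea (illustrative Python, not a real monad library): traverse a collection with a function of type `a -> (state -> (b, state))`, so every per-element computation stays pure while the single intended ordering is the explicit state thread.

```python
def traverse(xs, f):
    # Thread the state through a pure step function over the collection.
    def run(state):
        out = []
        for x in xs:
            y, state = f(x)(state)
            out.append(y)
        return out, state
    return run

def dedupe(x):
    # Example conditional operation on a set: keep only unseen items.
    def step(seen):
        if x in seen:
            return None, seen
        return x, seen | {x}           # frozenset union: no mutation
    return step

result, final_seen = traverse([1, 2, 1, 3], dedupe)(frozenset())
```

Because each `step` is pure, an implementation is free to parallelize any parts of the traversal whose state dependencies allow it; the only declared order is the state thread itself.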
End-to-end principle in two words: maximize degrees-of-freedom.
Degrees-of-freedom is a form of potential energy.
Noise is signal or vice versa, depending on the relative perspective (resonance) of the observer with the transmitter.
Don't fight the Second Law of Thermodynamics, which says the entropy of the universe trends to maximum (meaning *independent* possibilities).
I am very happy to see that even the creator (promoter) of the term "open source" is now running into the problem I intend to solve:
And to achieve that granularity, the language must be higher-order typed (e.g. monads), and Python is unityped (no compile-time typing).
I know what I am doing. I just need to execute more efficiently than I have been! (and I mean more efficiently!)
"The Cathedral and The Bazaar doesn’t offer any solution (proposed business model) for this."
The business models were presented in the Magic Cauldron, not CatB. There is a section "When to be Open, When to be Closed":
It states clearly that we share knowledge where the return on sharing (in the Inverse Commons) is greater than the return from hiding the knowledge. In the case of shared knowledge collaboration that doesn't organically materialize due to the transaction (mostly defocused communication overhead in this case of GUI stack programming) costs being greater than what a focused individual or small team can accomplish, then the Theory of the Firm predicts the corporation to be profitable by managing those costs, i.e. closed source wins. I covered this in my recent overview of the knowledge age:
My takeaway from this is that as your programming language models become modular, the closed modules can be extracted with more granularity, thus sharing more modules. Eric mentioned this in the "Reasons for Closing Source" section:
http://www.catb.org/esr/writings/cathedral-bazaar/magic-cauldron/ar01s06.html

"The separation of function would enable you to guard the crown jewels (the schema) while getting maximum benefit from open-sourcing the engine"
The complaint is more appropriate for lazy languages.
In a language with a lazy evaluation strategy (a.k.a. call-by-need), such as Haskell (which happens to be the language that strives for the strictest purity of functions), the determinism of memory-space and execution-time leaks is quite opaque to the programmer's abstract model of the computation (and thus can appear random or indeterminate), because the execution of functions does not proceed in the written nested order but is instead determined at run time by the program's input.
Whereas in an eager evaluation strategy (a.k.a. call-by-value) language (as most other popular languages are), these systemic side effects are transparent at compile time and thus are, in most cases, significantly orthogonal (in the programmer's mind) to the abstraction contract that the pure function provides.
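The classic instance of that opacity, simulated here in (eager) Python with explicit thunks: a call-by-need left fold suspends every addition, so the space it holds grows with the runtime input, while the eager fold's cost is visible in the source. (This is my own illustration of the well-known Haskell foldl space leak, not an example from the text.)

```python
def lazy_foldl(xs):
    # Simulate call-by-need: each step builds a suspension instead of adding.
    thunk = lambda: 0
    for x in xs:
        thunk = (lambda t, y: (lambda: t() + y))(thunk, x)  # O(n) suspended frames
    return thunk            # nothing has been evaluated yet

def eager_foldl(xs):
    # Call-by-value: each step is fully evaluated, constant space.
    acc = 0
    for x in xs:
        acc += x
    return acc
```

Forcing the lazy result with `lazy_foldl(xs)()` finally evaluates the whole chain at once, which is exactly the runtime-dependent behavior the paragraph calls opaque.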
Is an example of declarative state as follows?
if mode and mousedown and joyleft: ...
Where the equivalent event model with local state would be as follows, assuming we have a CPS transform for inversion of control.
joystate = none
while true: joystate = wait joychange
while true: if mode and wait mousedown and joystate == left: ...
The former can be implemented without local state either as a callback that runs periodically (e.g. at picosecond resolution) with the current states of mouse and joystick as inputs, or by watching for events on both mouse and joystick, with the state of the other as an input to the callback.
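The stateless formulation can be written directly as a pure callback: the current device states arrive as arguments on every sample, so no `joystate` variable is ever stored. (The name `on_sample` and the string-valued joystick state are illustrative assumptions.)

```python
def on_sample(mode, mousedown, joy):
    # Same predicate as `if mode and mousedown and joyleft: ...`,
    # but all state is input, none is stored.
    if mode and mousedown and joy == "left":
        return "fire"
    return None
```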
What I am trying to illustrate is that there is always an event occurring somewhere in code; the distinction is all about how we express the semantics of the program. Declarativity is about tightly coupling intended semantics to the programming model.
To whatever extent you avoid eventful abstractions, there is (literally) less code you can point at and call eventful.
Correction accepted. Below are an eventful and a non-eventful example.
if menuishidden and mousedown: showmenu
else if menuisshown and nothover and mouseup: hidemenu
if hover or mousedown: drawmenu
The eventful example toggles local menu state, and thus is neither spatially idempotent nor resilient. I refer to your blog posts Comparing FRP to RDP and Local State is Poison.
It is not enough to feed external state to FRP, because local state can still be semantically achieved even if it is accessed from a pure function.
The non-eventful example depends on a mousedown state change, yet the signal's result (whether the menu is drawn) does not depend on catching the exact instant the mouse went down in order to synchronize all the local state variables (an exponential permutation of complexity).
We should think of the local state variables as infinitesimally small samples of time, whereas the mousedown state variable is pseudo-continuous in time (discrete at greater than the picosecond timer resolution).

I reject the popular, vague, useless idea that “intent” has any impact on whether expression of a model is “declarative” or not. My definition for declarative is based on objective, formal properties of expressions.
Disagree. For example, multiple duplicate instances of loop control, in any order with the same inputs, will yield the same outputs, yet that is less declarative than the functionally expressed equivalent. To argue that HTML is less declarative because it doesn't have the unwanted noise of global instance ids seems counterproductive to me.

non-declarative languages have extraneous semantic properties, such as order of expression, that poorly align with abstractions in the problem domain
The real culprit appears to be local state (you realized that recently with Local State is Poison), known in the real world as bickering over local perspective. Eliminating all global state and order is the antithesis of reality. As a practical existential matter, there is no such thing as global state, because (since we can't instantiate infinity) the bottom type is not instantiable (in a strict language, and dually for top in a lazy language).

Tight coupling between intent and semantics is explained by having useful, problem-specific abstractions. Using ‘declarative’ as a synonym for ‘domain specific’ seems a waste of a fine word.
Your concern conflates domain-specific declarative models with general-purpose declarative models; i.e. whether a problem space is domain-constrained or general-purpose is not related to whether the offered solution is declarative.
More semantically correct.
while hover or mousedown: drawmenu
The distinction between `drawmenu` and `showmenu` is that the former doesn't save the boolean state of whether the menu is shown.
You don't really want to eliminate events (e.g. your button state change is still an event); rather, you want to eliminate callbacks, because they can't be composed without local state and unavoidable ordering (since each callback and its local state represents an infinitesimally small period, i.e. a discontinuity).
By feeding the state changes paired with timestamps into a pure function as inputs and receiving any changes in the output, events can be composed over non-infinitesimally small periods of time, i.e. sampling multiple input states over a window of time that smooths the aliasing error in the system.
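That windowed composition can be sketched as a pure function over the timestamped input history (all names and the time units are illustrative assumptions):

```python
def held_during(changes, now, window):
    """Pure: was the button down at any point in [now - window, now]?

    changes: chronological [(timestamp, is_down)] pairs, fed in as data
    rather than delivered as callbacks, so the result depends only on
    the inputs and not on catching any single instant."""
    state_at_start = False
    for t, is_down in changes:
        if t < now - window:
            state_at_start = is_down     # state entering the window
        elif t <= now:
            if is_down:
                return True              # went down inside the window
        else:
            break                        # future samples are out of scope
    return state_at_start
```

Sampling over a window like this is what smooths the aliasing error: missing the exact instant of a transition no longer changes the answer.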
This has not eliminated events; rather, it has eliminated local state. For example, the output pane of my HTML introduction has to be reparsed on every change to the editor pane. The onchange event is still an event whether it is implemented with a callback or as a pseudo-continuous state change input to a pure function.
This is why my original comment noted that there is still an event occurring somewhere. You are eliminating local state, not events. Had you explained it as clearly as this last year, it would have saved me a lot of time trying to figure out the essence of your idea. Now that I understand the simplicity and elegance of your idea, I like it.
Hope that helps.
DEFINITION OF DECLARATIVE PROGRAMMING
The reason your idempotent-and-commutative definition of declarative is meaningless to me is that if there existed a programming language where adding duplicate instances and reordering instances had no effect, then the language would always do precisely one behavior for any set of instances, regardless of duplicates and ordering. I have decades of real-world programming experience, and looking around the real non-programming world, I can't think of any correlation with that definition of declarativity.
In the real world there are many cases where the order and/or duplication present does not matter to the task at hand. For example, it doesn't matter to me in what order the 5 potatoes are counted that I need to cook french fries. The declaration is "fetch 5 potatoes". However, I do need to declare "fetch 5 potatoes" before I declare "slice the potatoes". How did it become less declarative in a way that matters, when I just violated the commutative property?
That is why I assert that the idempotent and commutative restriction only applies to those details which are not relevant to the semantics. So the order in which the potatoes are counted, and whether some are exact copies, should not affect the outcome of my "fetch 5" declaration. Thus my language is declarative.
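The claim can be demonstrated in miniature (purely illustrative code, my own invention): the outcome of "fetch 5" is invariant under the order in which the bin was counted, while the fetch-then-slice sequence remains expressible intended order.

```python
def fetch(bin_of_potatoes, n):
    # Which order the bin was counted in is irrelevant to the declaration:
    # normalize to a canonical choice of n items.
    return sorted(bin_of_potatoes)[:n]

def slice_all(potatoes):
    # Intended order: slicing is declared after fetching.
    return [("sliced", p) for p in potatoes]
```

Two differently ordered bins yield the same fetched result, so permuting the irrelevant detail (counting order) changes nothing, exactly as the paragraph argues.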
You conflate domain-specific with declarative, in the sense that you imply that all languages that are tightly coupled to the semantics of the intended domain will be declarative. Designing a language to be tightly coupled does not necessarily make it declarative, because this is a complex design problem that does not Halt. Domain-specific refers to targeting a language to a domain.
Declarative refers to (the degree of) achieving that target, i.e. how well the domain-specific (or general purpose) language enables one to express only intended order and duplication in the domain (or general purposes) and not unintended order and duplication.
As further evidence that your definition is vacuous and circular, consider that it is possible to code in a low-level, declaratively structured language in a way that creates unintended order and duplication in the high-level semantics.
Declarativity is only meaningful in the way I have defined it.
DECLARATIVE EVENTS AND LOCAL STATE
A pure function is always benign in composition w.r.t. any state the caller does not explicitly modify. A non-pure function, i.e. one with local state, is not benign and causes non-declarative reasoning under composition.
I realize that ordering and state also occur with pure functions, where the state is stored externally by the caller. The distinction is that such effects can be designed to be declarative, in the way I have defined declarative. The local-state paradigm can never be designed to be declarative, because it forces unintended order.
Time-dependent behavior specified declaratively means we don't accidentally specify the (unintended) ordering (and duplication) that is irrelevant to the semantics we want to express.
So I agree that to be declarative with time-dependent behavior requires not only the elimination of local state, but also the tight coupling of intended time-dependent semantics.
Now we need to find the models for time-dependent semantics which are best fit to the applied domains. No simple task.
The challenge when designing a declarative framework is to maintain the necessary generality because software is alive and never static. New features must not require waiting on a standards committee, e.g. HTML 5.
Software development is accelerating and that technological unemployment shift is the <a href='http://www.mpettis.com/2013/02/21/a-brief-history-of-the-chinese-growth-model/#comment-21647'>fascinating underlying cause</a> of the current global sovereign debt crisis.
MODELING EVENTS MORE DECLARATIVELY
Pondering your comment about the different ways you are thinking about modeling events, e.g. button click, it occurs to me that the problems derive from the time-dependent orders implicit in any stored state.
When we are declarative, we inherently abstract away unnecessary state.
For example, instead of modeling a drag-n-drop operation as `drag on mousedown, loop on mousemove, drop on mouseup`, we can model it as `while dragging then drop`. This converts the modal drag-initialization function that sets new state into a pure function which returns the new state while dragging.
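A sketch of `while dragging then drop` as a pure function of the current input sample (names and the tuple-shaped results are my illustrative assumptions, not a proposed Copute API):

```python
def drag_state(mouse_is_down, mouse_pos, pressed_at):
    """Pure: the drag phase is derived from inputs, never stored.

    pressed_at is where the button went down (None if it never did),
    itself derivable purely from the timestamped input history."""
    if mouse_is_down:
        return ("dragging", mouse_pos)   # `while dragging`: ghost follows mouse
    if pressed_at is not None:
        return ("drop", mouse_pos)       # `then drop`: release completes it
    return ("idle", None)
```

There is no modal mousedown handler that flips a flag; the same function evaluated on any sample yields the correct phase.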
I guess I am realizing that we must define semantics which minimize state and make the unavoidable state very closely coupled to the intended semantics.
For example, if pressing two buttons together gives different semantics than pressing one followed by the other, then we need a semantic for modeling a two-button press that doesn't require modeling the separate events with locally stored state. David mentioned this already.
AFAIK, David's proposed point-free semantics (Arrows) for composing signals is a generalized paradigm for his engine to globally model the multi-signal stream. I still don't have a good mental model of how this is beneficial compared to, for example, an API for consuming any two events as a single one over an allowed interval. What little I understand of it feels very abstracted away from what the programmer wants to declare.
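The alternative API floated above can be sketched as a pure function over the timestamped press history — detecting a two-button chord without per-event local state. (This is my illustration of the interval idea, not David's Arrow-based formulation; names and units are invented.)

```python
def chords(presses, interval):
    """presses: chronological [(timestamp, button)] pairs.

    Returns each pair of distinct buttons pressed within `interval`
    of each other, consumed as a single chord event."""
    out = []
    for i, (t1, b1) in enumerate(presses):
        for t2, b2 in presses[i + 1:]:
            if t2 - t1 > interval:
                break                     # later presses are even further away
            if b1 != b2:
                out.append((b1, b2))
    return out
```

Because the whole history is an input, the "pressed together" semantic needs no stored flag per button and no ordering beyond the timestamps themselves.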
Your blog does not show me a "reply" button on your later posts, so I cannot reply under them, properly indented. So I am forced to quote from your reply, so the reader can correlate which post of yours I am replying to.

“Fetch five potatoes” is not a declaration. It’s a command – literally, an imperative sentence. If it was a declaration, it would be commutative with other declarations, because declarative sentences are commutative
Your reply indicates you've entirely missed my point as to why your definition is circular, because you are now employing that circular straw man.
Any language that is Turing complete will be able to create order and duplication (even where there was an intention to isolate it, e.g. Haskell with its pure functions). Even any non-Turing-complete language that accomplishes any real-world task will express order.
Thus your definition is impossible, unless you want your language to do precisely nothing.
Whereas my definition is not only possible, it is precisely how we benefit from declarative languages.
Using a correct definition is crucial, because it will impact the design thought about declarative models.
Please allow a link to the prior discussion which you moved from these blog comments to a google doc. I already explained the logic in those comments.
https://docs.google.com/document/d/1P6jXX0vZMqKnZHGhB2CMBsPhB3VUlPiM07lcNBE4Op4/edit

When ordering is visible but required or implicit in the syntax, you can’t tell from semantics (denotational or otherwise) whether an ordering is accidental or not
That is what I mean by "can't see".

AFAICT, you’re effectively saying that a language is declarative so long as it correctly implements a denotational semantics.
No, because typing cannot (and should not) capture all of the semantics of a program. At the extreme, such as Epigram, we lose Turing completeness. In my prior comments, I explained how your definition similarly degenerates due to inconsistency (either you accept inconsistency, or you give up generality, the extreme being a 100% declarative program that does precisely one thing: that which was preprogrammed into its design).
And the above explains why you still don't get my point that your definition is vacuous. C.f. the Google doc of the removed comments for the logic.
Declarativity has nothing to do with the commutativity of the denotational semantics. It has to do with balancing accidental inconsistency against generality. One of the tools for achieving that is to disallow ordering dependencies in the low-level semantics that are not relevant to the denotational semantics.
Last edited by Shelby on Mon Mar 11, 2013 10:30 am; edited 6 times in total
JustSaying (Shelby) wrote:
@esr
Nevertheless, it is true that Japan seems to have far fewer hackers than it ought to, given the average levels of education and wealth and the degree of Internet penetration and the fact that the country is otherwise very positive about technology. And it’s not hard to connect this to identifiable features of Japanese culture.
Internet Explorer browser share by country seems to correlate with that "respect for authority" ("save face") conformance of northeast Asian culture.
71% South Korea
39% Hong Kong
20% South America
12% Viet Nam
Closer to the bottom of the list (i.e. lower IE share) means more use of open source.
* I code on the top of tallest mountain here.