[
  {
    "path": ".editorconfig",
    "content": "root = true\n\n[*]\ncharset = utf-8\nindent_style = tab\nend_of_line = lf\ninsert_final_newline = true\ntrim_trailing_whitespace = true\n\n[*.{md,yml,yaml}]\nindent_style = space\nindent_size = 2\n"
  },
  {
    "path": ".github/FUNDING.yml",
    "content": "github: [blainehansen]\n"
  },
  {
    "path": ".gitignore",
    "content": "\\#*.v\\#\n*.glob\n*.vo\n*.vok\n*.vos\n*.aux\n*.d\n\n*.cmi\n*.cmo\n*.out\n_build\n\nMakefile\nMakefile.conf\n*.cache\ntheorems/*.ml\ntheorems/*.mli\n\n*.local.*\n*.bc\n\n# Added by cargo\n\n/target\n"
  },
  {
    "path": "CODE_OF_CONDUCT.md",
    "content": "# Code of Conduct\n\nWe're using the exact same Code of Conduct as the Rust project, which [can be found online](https://www.rust-lang.org/conduct.html).\n"
  },
  {
    "path": "CONTRIBUTING.md",
    "content": "Hey there!\n\nRight now this project is optimized to be easy for me (Blaine Hansen) to work in. This means it might not be easy for anyone to jump right in, and syntax or workflow may disregard certain community standards if I find them inconvenient. I'm not really concerned with the different standards of different language communities, and if I feel a language community has made a standard choice that makes code more difficult to work with, I will completely ignore it.\n\nAlthough I will gladly accept pull requests that add conveniences for other setups, **I will deny any that disrupt my workflow**. I wish I had time to support other setups, but unfortunately working in Coq is very difficult and nit-picky, so the usual developer niceties such as using docker for local development aren't really practical.\n\nHere are the main things I can think of:\n\n- I run Ubuntu, so I have arranged all the scripts and build files to assume that. If you're interested in running on other systems, I'm afraid I have to leave you to your own devices. If a pull request makes a change that breaks the build on my system, I won't accept it. **I will gladly accept pull requests that make it possible to build everywhere!** However an important constraint is that [Coq interactive mode](https://packagecontrol.io/packages/Coq) must continue to work for me. If you can guide me toward a setup that allows other systems to run the build while working with Coq interactive mode, I'm happy to hear it.\n- [I only ever use tabs over spaces for indentation, always.](https://adamtuttle.codes/blog/2021/tabs-vs-spaces-its-an-accessibility-issue/) I will only use spaces if some irreplaceable piece of the system will literally not work if I don't (`yml` is an example). I'm more likely to simply [not use](https://github.com/avh4/elm-format/issues/158) a language if it requires spaces. You can see this choice being made in all the `dune` files throughout the project. 
The OCaml ecosystem seems to think that a *single* space is easy enough to read, whereas I find it extremely difficult to read (which highlights the real reason tabs are better, everyone can configure their own tab display width).\n- If some syntactic structure is \"list-like\" and supports one item per line, I will write it in a way that allows quickly adding and reordering lines without having to change the location of ending braces/parens. You can also see this in the `dune` files, where instead of using the lisp standard of placing closing parens on the same line as the last item, I place them on a new deindented line.\n\nThese probably seem trite and nit-picky, and maybe they are. I just don't want to fight with this code more than is necessary.\n\nThank you for your understanding!\n"
  },
  {
    "path": "Cargo.toml",
    "content": "[package]\nname = \"magmide\"\nversion = \"0.1.0\"\nedition = \"2021\"\n\n# [lib]\n# name = \"magmide\"\n# path = \"src/lib.rs\"\n# crate-type = [\"staticlib\", \"cdylib\"]\n\n# [build]\n# # https://doc.rust-lang.org/cargo/reference/config.html\n# rustflags = [\"-l\", \"LLVM-13\", \"-C\", \"link-args=-Wl,-undefined,dynamic_lookup\"]\n\n# # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n[dependencies]\nnom = \"7\"\n# anyhow = \"1.0.57\"\n"
  },
  {
    "path": "README.future.md",
    "content": "# Magmide\n\n> Correct, Fast, Productive: pick three.\n\nMagmide is the first language built from the ground up to allow software engineers to productively write extremely high performance software for any computational environment, logically prove the software correct, and run/compile that code all within the same tool.\n\nThe goal of the project is to spread the so-far purely academic knowledge of software verification and formal logic to a broad audience. It should be normal for engineers to create programs that are truly correct, safe, secure, robust, and performant.\n\nThis file is a \"by example\" style reference for the features and interface of Magmide. It doesn't try to explain any of the underlying concepts, just document decisions, so you might want to read one of these other resources:\n\n- If you want to be convinced the goal of this project is both possible and necessary, please read [What is Magmide and Why is it Important?]()\n- If you want to learn about software verification and formal logic using Magmide, please read [Intro to Verification and Logic with Magmide]().\n- If you want to contribute and need the nitty-gritty technical details and current roadmap, please read [The Technical Design of Magmide]().\n\n## Install and Use\n\nMagmide is heavily inspired by Rust and its commitment to ergonomic tooling and straightforward documentation.\n\n```bash\n# install magmide and its tools\ncurl --proto '=https' --tlsv1.2 -sSf https://sh.magmide.dev | sh\n\n# create a new project\nmagmide new hello-world\ncd hello-world\n\nmagmide check <entry>\nmagmide run\nmagmide build\n```\n\n## Syntax\n\nHere's what we can do\n\ncalling is just placing things next to each other with no commas. an *explicit* comma-separated list is always a tuple, which is why function arguments are always specified that way\npiping style calling uses `>functionname`. 
it seems that because of precedence and indentation rules which expressions are function names is always inferable?\nthis works inline too, so `data>functionname` or `data >infix something`\n`>> arg arg2; expr` defines an anonymous function and immediately calls it in piping style. `>>;` is then the equivalent of your old `do` idea\n`--` is the \"bumper\" for an indented expression\nthe sections of keywords are delimited by semicolons\nnested function calls are just indented since function calling is\n`/` is the *keyword continuation operator*, so all keywords, even possibly multi-line ones, can be defined metaprogrammatically within the language\n\n```\nif yo; --\n  function_name arg arg\n  >whatevs\n  >another thing\n  >> something; yo different something\n  >> hm; abb >hm diff\n/elif yoyo; whatevs\n/else; dude\n\nif yo; yoyo /else; dude\n\nlet thingy = if some >whatevs hmm; dude /else; yo\n```\n\npiping custom keywords can be done with a leading `;`? and standalone statement style ones are something else like `$`?\ncustom keywords are called with a leading `;`? so something like `;route_get yoyo something; whatevs /err; dude`\n\ncalling macros/known functions is indicated with something like a `~` or just the backtick thing? which means it can be done\n\ninclude the \"backpassing\" idea? 
or simplify it by somehow creating an \"implicit callback defining pipe operator?\" such as `>>>`?\n\n\n\n\n\n\nMagmide is whitespace/indentation sensitive.\nAnywhere a `;` can be used an opening indent can be used *additionally*.\nAnywhere a `,` can be used a newline can be used *instead*.\nThe `:` operator is always used in some way to indicate type-like assertions.\nPrecedence is decided using nesting with parentheses or indentation and never operator power.\n\"Wrapping\" delimiters are avoided.\n\"Pipeability\" is strongly valued.\nOperators are rarely used to represent actions that could be defined within the language, and instead prioritize adding new capabilities.\n\n```\n// defining computational types\ndata Unit\ndata Tuple;\n\n\ndata Macro (S=undefined);\n  | Block; BlockMacroFn\n  | Function; FunctionMacroFn\n  | Decorator; DecoratorMacroFn\n  | Import; ImportMacroFn(S)\n\n\nalias SourceChannel S; Dict<S> -> void\n\nfn non_existent_err macroName: str; str, str;\n  return \"Macro non-existent\", \"The macro \\\"${macroName}\\\" doesn't exist.\"\n\nfn incorrect_type_err\n  macroName: str\n  macroType: str\n  expectedType: str\n;\n  str\n  str\n;\n  return \"Macro type mismatch\", \"The macro \\\"${macroName}\\\" is a ${macroType} type, but here it's being used as a ${expectedType} type.\"\n\ndata CompileContext S;\n  macros: Dict(Macro(S))\n  fileContext: FileContext\n  sourceChannel: SourceChannel(S)\n  handleScript: { path: str source: str } -> void\n  readFile: str -> str | undefined\n  joinPath: ..str -> str\n  subsume: @T -> SpanResult<T> -> Result<T, void>\n  Err: (ts.TextRange, str, str) -> Result<any, void>\n  macroCtx: MacroContext\n\ndata MacroContext;\n  Ok: @T, (T, SpanWarning[]?) 
-> SpanResult<T>\n  TsNodeErr: (ts.TextRange, str, ..str) -> SpanResult<any>\n  Err: (fileName: str, title: str, ..str) -> SpanResult<any>\n  tsNodeWarn: (node: ts.TextRange, str, ..str[]) -> void\n  warn: (str, str, ..str[]) -> void\n  subsume: @T, SpanResult T -> Result T, void\n\n\ndata u8; bitarray(8)\n\nideal Day;\n  | monday | tuesday | wednesday | thursday\n  | friday | saturday | sunday\n\n  use Day.*\n\n  rec next_weekday day: Day; match day;\n    monday; tuesday, tuesday; wednesday, wednesday; thursday, thursday; friday\n    friday; monday, saturday; monday, sunday; monday\n\nideal Bool;\n  | true\n  | false\n\n  use Bool.*\n\n  rec negate b: Bool :: bool;\n    match b;\n      true; false\n      false; true\n\n  rec and b1: bool, b2: bool :: bool;\n    match b1;\n      true; b2\n      false; false\n\n  rec or b1: bool, b2: bool :: bool;\n    match b1;\n      true; true\n      false; b2\n\n  impl core.testable;\n    rec test b: Bool :: bool;\n      match b; true; testable.true, false; testable.false\n\n  rec negate_using_test b: Bool :: bool;\n    test b;\n      false\n      true\n\n\nideal IndexList<A: ideal> :: nat;\n  | Nil :: IndexList(0)\n  | Cons :: @n A IndexList(n) -> IndexList(n;next)\n\n  rec append n1, ls1: IndexList(n1), n2, ls2: IndexList(n2) :: IndexList(n1 ;add n2);\n    match ls1;\n      Nil; ls2\n      Cons(_, x, ls1'); Cons(x, append(ls1', ls2))\n\nprop even :: nat;\n  | zero: even(0)\n  | add_two: @n, even(n) -> even(n;next;next)\n\n  use even.*\n  thm four_is: even(4); prf;\n    + add_two; + add_two; + zero\n\n  thm four_is__next: even(4); prf;\n    + (add_two 2 (add_two 0 zero))\n\n  thm plus_four: @n, even n -> even (4 ;add n); prf;\n    => n; >>; => Hn;\n    + add_two; + add_two; + Hn\n\n  thm inversion:\n    @n: nat, even n -> (n = 0) ;or (exists m; n = m;next;next ;and even m)\n  ; prf;\n    => n [| n' E']\n      left; _\n      --\n        right; exists n'; split\n        _; + E'\n\n```\n\n\n\n## Metaprogramming\n\n## Interactive 
Tactic Mode\n\n\n\n## Module system\n\n```\n// use a module whose location has been specified in the manifest\n// the manifest is essentially sugar for a handful of macros\nuse lang{logic, compute}\n\n// the libraries 'lang', 'core', and 'std' are spoken for. perhaps though we can allow people to specify external packages with these names, we'll just give a warning that they're shadowing builtin modules\n\n// use a local module\n// files/directories/internal modules are all accessed with .\n// `__mod.mg` can act as a \"module entry\" for a directory, you can't shadow child files or directories\n// the `mod` keyword can create modules inside a file, you can't shadow sibling files or directories\n// `_file.mg` means that module is private, but since this is a verified language this is just a hint to not show the module in tooling, any true invariants should be fully specified with `&`\nuse .local.nested{thing, further{nested.more, stuff}}\n\n// can do indented instead\nuse .local.nested\n  thing\n  further{nested.more, stuff}\n  whatever\n    stuff.thingy\n\n// goes up to the project root\nuse ~local.whatever\n\n// the module system allows full qualification of libraries, even to git repositories\n// the format 'name/something' defaults to namespaced libraries on the main package manager\n// a full git url obviously refers to that repo\nuse person/lib.whatever\n\n// the above could be equivalent to:\nlet person_lib = lang.pull_lib$(git: \"https://github.com/person/lib\")\nuse person_lib.whatever\n```\n\n\n```\nuse lang.{ logic, compute }\n\n// all inductive definitions use the `ind` keyword\n// the different kinds of types are included by default and automatically desugared to be the more \"pure\" versions of themselves\n\n// a union-like inductive\nind Day\n  | monday | tuesday | wednesday | thursday\n  | friday | saturday | sunday\n\n// a record-like inductive\nind Date\n  year: logic.Nat\n  month: logic.Nat & between(1, 12)\n  day: logic.Nat\n\n// a tuple-like 
inductive\nind IpAddress; logic.Byte, logic.Byte, logic.Byte, logic.Byte\n\n// the same as above but with a helper macro\nind IpAddress; logic.tuple_repeat(logic.Byte, 4)\n\n// a unit-like inductive\nind Unit\n\nrec next_weekday day\n  // bring all the constructors of Day into scope\n  use Day.*\n  match day\n    monday; tuesday, tuesday; wednesday, wednesday; thursday, thursday; friday\n    friday; monday, saturday; monday, sunday; monday\n\n\nlet next_weekday_computable = compute.logic_computable(next_weekday)\nlet DayComputable = compute.type(next_weekday_computable).args[0].type\n\ndbg next_weekday_computable(DayComputable.monday)\n// outputs \"Day.tuesday\"\n\n\n// what if we were to define the above types and function in the computable language?\n// it's as simple as changing \"ind\" to \"type\", \"rec\" to \"fn\", and ensuring all types are computable\n// all of these \"creation\" keywords are ultimately just some kind of sugar for a \"let\"\n\ntype Day\n  | monday | tuesday | wednesday | thursday\n  | friday | saturday | sunday\n\ntype Date\n  year: u16\n  month: u8 & between(1, 12)\n  day: u8\n\ntype Name; first: str, last: str\n\ntype Pair U, T; U, T\n\ntype IpAddress; u8, u8, u8, u8\n\ntype IpAddress; compute.tuple_repeat(u8, 4)\n\ntype Unit\n\nfn next_weekday day\n  use Day.*\n  // a match implicitly takes discriminee, arms, proof of completeness\n  match day\n    monday; tuesday, tuesday; wednesday, wednesday; thursday, thursday; friday\n    friday; monday, saturday; monday, sunday; monday\n\n// now no need to convert it first\ndbg next_weekday(Day.monday)\n// outputs \"Day.tuesday\"\n```\n\nIn general, `;` is an inline delimiter between tuples, and `,` is an inline delimiter between tuple elements. Since basically every positional item in a programming language is a tuple (or the tuple equivalent record), the alternation of these two can delimit everything. 
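For example (hypothetical syntax, echoing the `type Name` definition above), these two spellings would be interchangeable:\n\n```\n// inline form\ntype Name; first: str, last: str\n\n// indented form of the same definition\ntype Name\n  first: str\n  last: str\n```\n\n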
Note these are only *inline* delimiters, indents are the equivalent of `;` and newlines are the equivalent of `,`.\nWhy `;`? Because `:` is for type specification.\n\n`==` is for equality, and maps to the appropriate kind of equality depending on whether it's used in a logical or computational context.\n\n\n### trait system in host magmide\ndon't need an orphan rule, just need explicit impl import and usage. the default impl is the bare one defined alongside the type, and either you always have to manually include/specify a different impl or it's a semver violation to add a bare impl alongside a type that previously didn't have one\n\n\n\n### example: converting a \"logical\" inductive type into an actual computable type\n\n### example: adding an option to a computable discriminated union\n\n### example: proving termination of a\n\n## The embedded `core` language\n\n\n## Testing\n\ntalk about quickcheck and working up to a proof\n\n## Metaprogramming\n\nKnown strings given to a function\nKeyword macros\n\n"
  },
  {
    "path": "README.md",
    "content": "# :construction: Magmide is purely a research project at this point :construction:\n\nThis repo is still very early and rough; it's mostly just notes, speculative writing, and exploratory theorem proving. Most of the files in this repo are just \"mad scribblings\" that I haven't refined enough to actually stand by!\n\nIf you prefer video, this presentation talks about the core ideas that make formal verification and Magmide possible, and the design goals and intentions of the project:\n\n[![magmide talk](https://img.youtube.com/vi/Lf7ML_ErWvQ/0.jpg)](https://www.youtube.com/watch?v=Lf7ML_ErWvQ)\n\nIn this readme I give a broad overview and answer a few possible questions. Enjoy!\n\n---\n\nThe goal of this project is to **create a programming language capable of making formal verification and provably correct software practical and mainstream**. The language and its surrounding education/tooling ecosystem should provide a foundation strong enough to create verified software for any system or environment.\n\nSoftware is an increasingly critical component of our society, underpinning almost everything we do. It's also extremely vulnerable and unreliable. Software vulnerabilities and errors have likely caused humanity [trillions of dollars](https://www.it-cisq.org/pdf/CPSQ-2020-report.pdf) in damage, [social harm](https://findstack.com/hacking-statistics/), waste, and [lost growth opportunity](https://raygun.com/blog/cost-of-software-errors/) in the digital age (it seems clear [Tony Hoare's estimate](https://en.wikipedia.org/wiki/Tony_Hoare#Apologies_and_retractions) is way too conservative, especially if you include more than `null` errors).\n\nWhat would it look like if it were both possible and tractable for working software engineers to build and deploy software that was *provably correct*? 
Using [proof assistant languages](https://en.wikipedia.org/wiki/Proof_assistant) such as [Coq](https://en.wikipedia.org/wiki/Coq) it's possible to define logical assertions as code, and then write proofs of those assertions that can be automatically checked for consistency and correctness. Systems like this are extremely powerful, but have only been suited for niche academic applications until the fairly recent invention of [separation logic](http://www0.cs.ucl.ac.uk/staff/p.ohearn/papers/Marktoberdorf11LectureNotes.pdf).\n\nSeparation logic isn't a tool, but a paradigm for making logical assertions about mutable and destructible state. The Rust ownership system was directly inspired by separation logic, which shows us that it really can be used to unlock revolutionary levels of productivity and excitement. Separation logic makes it possible to verify things about practical imperative code, rather than simply outlawing mutation and side effects as is done in functional languages.\n\nHowever, Rust only exposes a simplified subset of separation logic rather than the full power of the paradigm. [The Iris separation logic](https://people.mpi-sws.org/~dreyer/papers/iris-ground-up/paper.pdf) was recently created by a team of academics to fully verify the correctness of the Rust type system and several core implementations that use `unsafe`. Iris is a fully powered separation logic, making it uniquely capable of verifying the kind of complex, concurrent, arbitrarily flexible assertions that could be implied by practical Rust code, even those that use `unsafe`. Iris could do the same for any other practical and realistic language.\n\nIsn't that amazing?!? A system that can prove completely and eternally that a use of `unsafe` isn't actually unsafe??!! You'd think the entire Rust and systems programming community would be over the moon!\n\nBut as is common with academic projects, it's only being used to write papers rather than build real software systems. 
All the existing uses of Iris perform the proofs \"on the side\", analyzing [manual transcriptions of the source code as Coq notation](https://coq.inria.fr/refman/user-extensions/syntax-extensions.html) rather than directly reading the original source. And although the papers are more approachable than most academic papers, they're still academic papers, and so basically no working engineers have even heard of any of this.\n\nThis is why I'm building Magmide, which is intended to be to Coq what Rust has been to C. There are quite a few proof languages capable of proving logical assertions in code, but none exist that are specifically designed to be used by working engineers to build real imperative programs. None have placed a full separation logic, particularly one as powerful as Iris, at the heart of their design, but instead are overly dogmatic about the pure functional paradigm. And all existing proof languages are hopelessly mired in the obtuse and unapproachable fog of [research debt](https://distill.pub/2017/research-debt/) created by the culture of academia. Even if formal verification is already capable of producing [provably safe and secure code](https://www.quantamagazine.org/formal-verification-creates-hacker-proof-code-20160920/), it isn't good enough if only professors have the time to gain the necessary expertise. We need to pull all this amazing knowledge out of the ivory tower and finally put it to work to make computing truly safe and robust.\n\nI strongly believe a world with mainstream formal verification would not only see a significant improvement in *magnitude* of social good produced by software, but a significant improvement in *kind* of social good. 
In the same way that Rust gave engineers much more capability to safely compose pieces of software, thereby enabling them to confidently build much more ambitious systems, a language that gives them the ability to automatically check arbitrary conditions will make safe composition and ambitious design arbitrarily easier to do correctly.\n\nWhat kinds of ambitious software projects have been conceived but not pursued because getting them working would simply be too difficult? With machine-checkable proofs in many more hands, could we finally build *truly secure* operating systems, trustless networks, or electronic voting methods? How many people could be making previously unimagined contributions to computer science, mathematics, and even other logical fields such as economics and philosophy if only they had approachable tools to do so? I speculate about some possibilities at the end of this readme.\n\nTo achieve this goal I've chosen an architecture I call the \"split Logic/Host\" architecture, where the two domains of software thinking are separated into two languages:\n\n- Logic, the dependently typed lambda calculus of constructions. This is where \"imaginary\" types are defined and proofs are conducted.\n- Host, the imperative language that actually runs on real machines.\n\nThese two components must have a symbiotic relationship with one another: Logic is used to define and make assertions about Host, and Host computationally represents and implements both Logic and Host itself.\n\n```\n         represents and\n           implements\n  +------------+------------+\n  |            |            |\n  |            |            |\n  v            |            |\nLogic          +---------> Host\n  |                         ^\n  |                         |\n  |                         |\n  +-------------------------+\n        logically defines\n          and verifies\n```\n\nThe easiest way to understand this is to think of Logic as the type system of Host. 
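As a purely hypothetical sketch of that relationship (none of this syntax is settled; `checked_div` and this use of an `&` refinement are invented here just for illustration):\n\n```\n// Host: an ordinary imperative function that compiles to machine code\n// Logic: the `& b != 0` refinement is a compile-time proof obligation\n// discharged by callers, and costs nothing at runtime\nfn checked_div a: u32, b: u32 & b != 0; u32\n  a / b\n```\n\n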
Logic is \"imaginary\" and only exists at compile time, and constrains/defines the behavior of Host. Logic just happens to itself be a dependently typed functional programming language! This design takes the concept of [self-hosting](https://en.wikipedia.org/wiki/Self-hosting_(compilers)) to its logical extreme.\n\nWe intend to achieve this goal by [building Magmide as the Logic portion with Rust as Host, then defining the semantics of Rust *within* Magmide, and finally building a \"reflective proof rule\" into Magmide to allow it to use verified Rust code during proof checking.](https://github.com/magmide/magmide/blob/main/posts/design-of-magmide.md) This seems the most realistic way to bootstrap the project!\n\nI'm convinced this general architecture is the only one that can achieve Magmide's extremely ambitious goal. It feels like an optimal point in the design space, since I can't imagine another architecture that would allow all of the language components (proof checker, code compiler, target code being compiled) the possibility to be both bare metal and fully verified.\n\nBut it's not good enough for the architecture to *allow* a great language design. Everything else about the design has to be chosen correctly as well. I claim that in order for the language to achieve its goal, it has to meet all these descriptions:\n\n## Capable of arbitrary logic\n\nIn order to really deliver the kind of truly transformative correctness guarantees that will inspire working engineers to learn and use a difficult new language, it doesn't make sense to stop short and only give them an \"easy mode\" verification tool. It should be possible to formalize and attempt to prove any proposition humanity is capable of representing logically, not only those that a fully automated tool like an [SMT solver](https://liquid.kosmikus.org/01-intro.html) can figure out. 
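As a small taste of what that means, here is the kind of inductive proposition and machine-checked proof a proof assistant can express, shown here in Lean rather than Magmide (it mirrors the `even` example sketched in README.future.md):\n\n```lean\n-- evenness defined inductively: 0 is even, and n + 2 is even whenever n is\ninductive Even : Nat -> Prop where\n  | zero : Even 0\n  | add_two : (n : Nat) -> Even n -> Even (n + 2)\n\n-- a direct proof object, verified by the type checker alone\ntheorem even_four : Even 4 :=\n  Even.add_two 2 (Even.add_two 0 Even.zero)\n```\n\nNo automated solver is involved in checking this; the checker simply verifies that the proof term has the claimed type.\n\n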
**A language with full logical expressiveness and manual proofs can still use convenient automation as well**, but the opposite isn't true.\n\nTo meet this description, the language will be fully dependently typed and use the [Calculus of Constructions](https://en.wikipedia.org/wiki/Calculus_of_constructions) much like [Coq](https://en.wikipedia.org/wiki/Coq). I find [Adam Chlipala's \"Why Coq?\"](http://adam.chlipala.net/cpdt/html/Cpdt.Intro.html) arguments convincing in regard to this choice. Coq will also be used to bootstrap the first version of the compiler, allowing it to be self-hosting and even self-verifying using a minimally small trusted theory base. Read more about the design and bootstrapping plan in [`posts/design-of-magmide.md`](./posts/design-of-magmide.md). The [metacoq](https://github.com/MetaCoq/metacoq) and [\"Coq Coq Correct!\"](https://metacoq.github.io/coqcoqcorrect) projects have already done the work of formalizing and verifying Coq using Coq, so they will be very helpful.\n\nIt's absolutely possible for mainstream engineers to learn and use these powerful logical concepts. The core ideas of formal verification (dependent types, proof objects, higher order logic, separation logic) aren't actually that complicated. They just haven't ever been properly explained because of [research debt](https://distill.pub/2017/research-debt/), and they weren't even all that practical before separation logic and Iris. I've been working on better explanations in the (extremely rough and early) [`posts/intro-verification-logic-in-magmide.md`](./posts/intro-verification-logic-in-magmide.md) and [`posts/coq-for-engineers.md`](./posts/coq-for-engineers.md).\n\n## Capable of bare metal performance\n\nSoftware needs to perform well! Not all software has the same requirements, but often performance is intrinsically tied to correct execution. Very often the software that most importantly needs to be correct also most importantly needs to perform well. 
**If the language is capable of truly bare metal performance, it can still choose to create easy abstractions that sacrifice performance where that makes sense.**\n\nTo meet this description, Magmide will be built in and deeply integrated with Rust. Excitingly, because of the inherent power and flexibility of a proof assistant, this integration with Rust doesn't have to be permanent, and we could build other languages to act as Host as long as we can specify their semantics and make them interoperable!\n\nBecause of separation logic and Iris, it is finally possible to verify code as low-level as Rust and more!\n\n## Gradually verifiable\n\nJust because it's *possible* to fully verify all code doesn't mean it should be *required*. It simply isn't practical to try to completely rewrite a legacy system in order to verify it. **We must be able to write code without needing to prove it's perfectly correct**, otherwise iteration and incremental adoption are impossible. Existing languages with goals of increased rigor such as Rust and TypeScript strategically use concessions in the language such as `unsafe` and `any` to allow more rigorous code to coexist with legacy code as it's incrementally replaced. The only problem is that these concessions introduce genuine soundness gaps into the language, and it's often difficult or impossible to really understand how exposed your program is to these safety gaps.\n\nWe can get both practical incremental adoption and complete understanding of the current safety of our program by leveraging work done in the [Iron obligation management logic](https://iris-project.org/pdfs/2019-popl-iron-final.pdf) built using Iris. We can use a concept of trackable effects to allow some safety conditions to be *optional*.\n\nTrackable effects will work by requiring a piece of some \"correctness token\" to be forever given up in order to perform a dangerous operation without justifying its safety with a proof. 
This would infect the violating code block with an effect type that will bubble up through any parent blocks. Defining effects in this way makes them completely composable *resources* rather than *wrappers*, meaning that they're more flexible and powerful than existing effect systems. Systems like algebraic effects or effect monads could be implemented using this resource paradigm, but the opposite isn't true.\n\nIf the trackable effect system is defined in a sufficiently generic way then custom trackable effects could be created, allowing different projects to introduce new kinds of safety and correctness tracking, such as ensuring asynchronous code doesn't block the executor, or a web app doesn't render raw untrusted input, or a server doesn't leak secrets.\n\nEven if a project chooses to ignore some effects, they'll always know those effects are there, which means other possible users of the project will know as well. Project teams could choose to fail compilation if their program isn't memory safe or could panic, while others could tolerate some possible effects or write proofs to assert they only happen in certain well-defined circumstances. It would even be possible to create code that provably sandboxes an effect by ensuring it can't be detected at any higher level if contained within the sandbox. With all these systems in place, we can finally have a genuinely secure software ecosystem!\n\n## Fully reusable\n\nWe can't write all software in assembly language! Including first-class support for powerful metaprogramming, alongside a [query-based compiler](https://ollef.github.io/blog/posts/query-based-compilers.html), will allow users of this language to build verified abstractions that \"combine upward\" into higher levels, while still allowing the possibility for those higher levels to \"drop down\" back into the lower levels. 
Since the language is a proof assistant, these escape hatches don't have to be unsafe, as higher level code can provide proofs to the lower level to justify its actions.\n\nThis ability to create fully verifiable higher level abstractions means we can create a \"verification pyramid\", with excruciatingly verified software forming a foundation for a spectrum of software that decreases in importance and rigor. **Not all software has the same constraints, and it would be dumb to verify a recipe app as rigorously as a cryptography function.** But even a recipe app would benefit from its foundations removing the need to worry about whole classes of safety and soundness conditions. And wouldn't it be great to prove your app will never leak memory or throw exceptions or enter an infinite loop/recursion?\n\nMagmide *itself* doesn't have to achieve mainstream success to massively improve the quality of all downstream software; merely some sub-language built on top of it does. Many engineers have never heard of LLVM, but they still implicitly rely on it every day. Magmide would seek to do the same. We don't have to make formal verification fully mainstream; we just have to make it available for the handful of people willing to do the work. If a full theorem prover is sitting right below the high-level language you're currently working in, you don't have to bother with it most of the time, but you still have the option to do so when it makes sense.\n\nThe metaprogramming can of course also be used directly in the dependently typed language, allowing compile-time manipulation of proofs, functions, and data. Verified proof tactics, macros, and higher-level embedded programming languages are all possible. 
This is the layer where absolutely essential proof automation tactics similar to Coq's `auto` or [Adam Chlipala's `crush`](http://adam.chlipala.net/cpdt/html/Cpdt.Intro.html), or fast counter-example searchers such as `quickcheck`, or [computational reflection systems](./posts/design-of-magmide.md#heavy-use-of-computational-reflection-to-improve-proof-performance) would be implemented.\n\nImportantly, the language will be self-hosting, so metaprogramming functions will benefit from the same bare metal performance and full verifiability.\n\nYou can find rough notes about the current design thinking for the metaprogramming interface in [`posts/design-of-magmide.md`](./posts/design-of-magmide.md).\n\n## Practical and ergonomic\n\nMy experience using languages like Coq has been extremely painful, and the interface is \"more knife than handle\". I've been astounded how willing academics seem to be to use extremely clunky workflows and syntaxes just to avoid having to build better tools.\n\nTo meet this description, this project will learn heavily from `cargo` and other excellent projects. 
**It should be possible to verify, interactively prove, and query Magmide code with a single tool.** The split Logic/Host architecture will likely make it easier to understand and use Magmide.

It will also fully embrace ergonomic type inference, and use techniques such as those from ["Flux: Liquid Types for Rust"](https://arxiv.org/abs/2207.04034) to allow even many *proof* conditions to be inferred.

## Taught effectively

**Working engineers are resource-constrained and don't have years of free time to wade through arcane and disconnected academic papers.** Academics aren't incentivized to properly explain and expose their amazing work, and a massive amount of [research debt](https://distill.pub/2017/research-debt/) has accrued in many fields, including formal verification.

To meet this description, this project will enshrine the following values in regard to teaching materials:

- Speak to a person who wants to get something done, not a review committee evaluating academic merit.
- Put concrete examples front and center.
- Point the audience toward truly necessary prerequisites rather than assuming shared knowledge.
- Prefer graspable human words to represent ideas, never use opaque and unsearchable non-ASCII symbols, and only use symbolic notations when they're both truly useful and properly explained.
- Prioritize the hard work of finding clear and distilled explanations.

---

Read [`posts/design-of-magmide.md`](./posts/design-of-magmide.md) or [`posts/comparisons-with-other-projects.md`](./posts/comparisons-with-other-projects.md) to more deeply understand the intended design and how it's different from other projects.

Building such a language is a massively ambitious goal. It might even be too ambitious! But we have to also consider the opposite: perhaps previous projects haven't been ambitious enough, and that's why formal verification is still niche!
Software has been broken for too long, and we won't have truly solved the problem until it's at least *possible* for all software to be verified.\n\n<!--\n# Project values\n\n[The long term path of a project is determined by its values](TODO), so we should define ours.\n\n- Fidelity. This value combines performance and full verifiability\n\nFor a language to both perform as well as possible and be verifiable as deeply as possible, we must make the language as faithful and accurate a model of the real underlying computation as possible. We can still use our very low level and granular models to build up verified abstractions for higher levels of reasoning, but the whole thing must have an accurate foundation that ties directly into the bare metal. Fidelity can be used to get us several other desirable things, such as performance and increased safety.\n- Practicality. If we want to make the software of our world safer and more robust, then we have to build a language that can actually be used to achieve useful work in real applications. This means the language should allow compatibility with existing systems and incremental adoption.\n- Approachability. I genuinely believe the culture and working patterns of academia aren't just inefficient in regards to producing usable knowledge for society, but are toxic and exclusionary. [Research debt]()\n\nTo create a language that can possibly have all the above design qualities, I claim we have to max out\n-->\n\n# FAQ\n\n## Is it technically possible to build a language like this?\n\nYes! None of the technical details of this idea are untested or novel. Dependently typed proof languages, higher-order separation logic, query-based compilers, introspective metaprogramming, and abstract assembly languages are all ideas that have been proven in other contexts. Magmide would merely attempt to combine them into one unified and practical package.\n\n## Is this language trying to replace Rust?\n\nNo! 
My perfect outcome of this project would be for it to sit *underneath* Rust, acting as a new verified toolchain that Rust could \"drop into\". The concepts and api of Rust are awesome and widely loved, so Magmide would just try to give it a more solid foundation.\n\n## If this is such a good idea why hasn't it happened yet?\n\nMostly because this idea exists in an \"incentive no man's land\".\n\nAcademics aren't incentivized to create something like this, because doing so is just \"applied\" research which tends not to be as prestigious. You don't get to write many groundbreaking papers by taking a bunch of existing ideas and putting them together nicely.\n\nSoftware engineers aren't incentivized to create something like this, because a programming language is a pure public good and there aren't any truly viable business models that can support it while still remaining open. Even amazing public good ideas like the [interplanetary filesystem](https://en.wikipedia.org/wiki/InterPlanetary_File_System) can be productized by applying the protocol to markets of networked computers, but a programming language can't really pull off that kind of maneuver.\n\nAlthough the software startup ecosystem does routinely build pure public goods such as databases and web frameworks, those projects tend to have an obvious and relatively short path to being useful in revenue-generating SaaS companies. The problems they solve are clear and visible enough that well-funded engineers can both recognize them and justify the time to fix them. In contrast the path to usefulness for a project like Magmide is absolutely not short, and despite promising immense benefits to both our industry and society as a whole, most engineers capable of building it can't clearly see those benefits behind the impenetrable fog of research debt.\n\n<!-- The problem of not properly funding pure public goods is much bigger than just this project. 
We do a bad job of this in every industry and so our society has to tolerate a lot of missed opportunity and negative externalities. The costs of broken software are more often borne by society than the companies at fault, since insurance and limited-liability structures and PR shenanigans and expensive lawyers can all help a company wriggle out of fully internalizing the cost of their mistakes. Profit-motivated actors are extremely short-sighted and don't have to care if they leave society better off; they just have to get market share. -->

We only got Rust because Mozilla had been investing in dedicated research for a long time, and it still doesn't seem to have really financially paid off for them in the way you might hope.

## Will working engineers actually use it?

Maybe! We can't force people or guarantee it will be successful, but we can learn a lot from how Rust has been able to successfully teach quite complex ideas to a huge and excited audience. I think Rust has succeeded by:

- *Making big promises* in terms of how performant/robust/safe the final code can be.
- *Delivering on those promises* by building something awesome. I hope that since the entire project will have verification in mind from the start, it will be easier to ship something excellent and robust with less churn than usual.
- *Respecting people's time* by making the teaching materials clear and distilled and the tooling simple and ergonomic.

All of those things are easier said than done! Fully achieving those goals will require work from a huge community of contributors.

## Won't writing verified software be way more expensive? Do you actually think this is worth it?

**Emphatically, yes, it is worth it.** As alluded to earlier, broken software is a massive drain on our society. Even if it were much more expensive to write verified software, it would still be worth it.
Rust has already taught us that it's almost always worth it to [have the hangover first](https://www.youtube.com/watch?v=ylOpCXI2EMM&t=565s&ab_channel=Rust) rather than wastefully churn on a problem after you thought you could move on.\n\nVerification is obviously very difficult. Although I have some modest theories about ways to speed up/improve automatic theorem proving, and how to teach verification concepts in a more intuitive way that can thereby involve a larger body of engineers, we still can't avoid the fact that refining our abstractions and proving theorems is hard and will remain so.\n\nBut we don't have to make verification completely easy and approachable to still get massive improvements. We only have to make proof labor more *available* and *reusable*. Since Magmide will be inherently metaprogrammable and integrate programming and proving, developments in one project can quickly disseminate through the entire language community. Research would be much less likely to remain trapped in the ivory tower, and could be usefully deployed in real software much more quickly.\n\nAnd of course, a big goal of the project is to make verification less expensive! Tooling, better education, better algorithms and abstractions can all decrease verification burden. If the project ever reaches maturity these kinds of improvements will likely be most of the continued effort for a long time.\n\nBesides, many projects already write [absolutely gobs of unit tests](https://softwareengineering.stackexchange.com/questions/156883/what-is-a-normal-functional-lines-of-code-to-test-lines-of-code-ratio), and a proof is literally *infinitely* better than a unit test. At this point I'm actually hopeful that proofs will *decrease* the cost of writing software. 
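To make the contrast between tests and proofs concrete, here's a tiny Rust sketch (illustrative only, nothing to do with Magmide's eventual syntax): a unit test checks a handful of hand-picked inputs, while a proof establishes a universal statement. For a domain as small as `u8` we can even brute-force the universal statement to feel the difference in strength.

```rust
fn sat_add(a: u8, b: u8) -> u8 {
    a.saturating_add(b)
}

fn main() {
    // Unit-test style: a few hand-picked cases. Passing tells us nothing
    // about the inputs we didn't think to write down.
    assert_eq!(sat_add(200, 100), 255);
    assert_eq!(sat_add(1, 2), 3);

    // Proof-style claim: sat_add is commutative for *all* inputs.
    // A proof assistant establishes this symbolically; here the domain is
    // small enough (65,536 pairs) to check exhaustively instead.
    for a in 0..=u8::MAX {
        for b in 0..=u8::MAX {
            assert_eq!(sat_add(a, b), sat_add(b, a));
        }
    }
    println!("commutativity holds for all 65536 input pairs");
}
```

The exhaustive loop only works because the domain is tiny; for realistic types and properties, that gap is exactly what a proof fills.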
We'll see.

## Is it actually useful to prove code meets some specification if we still have to trust the specification?

In a way, yes, this is true: when we prove an implementation meets some specification, we're mostly just shifting uncertainty/trust from the implementation to the specification. This is part of why it's impossible for our systems to ever be completely perfect (whatever "perfect" means).

However, I assert that this shifting of trust from code to specifications (or put another way, from trusted code to trusted theory) is worth the effort and a huge improvement over the status quo for these reasons:

- Specifications can refer to each other and be built upon, thereby revealing inconsistent assumptions and shaking out errors. Whenever an incorrect specification interfaces in any way with a correct one, the incompatibility between them will be revealed at compile time. It's likely you've already experienced exactly this dynamic when you incorrectly define a *type* (type systems are just very simple proof systems!). If you mistakenly define a type field as an unsigned integer when it needs to be signed, your mistake will be revealed when you try to use the incorrect type in other code that expects a signed integer. This won't always happen, but with deeper proof systems it has the opportunity to happen even more often than it happens in type systems.
- Specifications can be much smaller and terser than implementations, and therefore easier to audit. When we audit a specification, we only have to audit the type signatures of our theorems and functions, rather than all the code inside them. Implementations have to worry about performance and many internal details that don't need to be revealed, whereas specifications only have to make assertions about whatever visible behavior is desired.
Specifications can be stated in whatever naive, simple, pure functional form makes the assertion easy to understand, whereas implementations often need to use arcane tricks and confusingly evolving mutable structures to make the algorithm efficient. If the specification is larger than the implementation, I would tend to suspect one or both of them could be structured more intelligently.

## Do you think this language will make all software perfectly secure?

No! Although it's certainly [very exciting to see how truly secure verified software can be](https://www.quantamagazine.org/formal-verification-creates-hacker-proof-code-20160920/), there will always be a long tail of hacking risk. Not all code will be written in securable languages, not all engineers will have the diligence or the oversight to write secure code, people can make bad assumptions, and brilliant hackers might invent entirely new *types* of attack vectors that aren't considered by our safety specifications (although inventing new attack vectors is obviously way more difficult than just doing some web searches and running scripts, which is all a hacker has to do today).

However, *any* verified software is better than *none*, and right now it's basically impossible for a security-conscious team to even attempt to prove their code secure. Hopefully the "verification pyramid" referred to earlier will enable almost all software to quickly reuse secure foundations provided by someone else.

And of course, social engineering and hardware tampering are never going away, no matter how perfect our software is.

## Is logically verifying code even useful if that code relies on possibly faulty software/hardware?

This is nuanced, but the answer is still yes!

First, let's get something out of the way: software is *literally nothing more* than a mathematical/logical machine. It is one of the very few things in the world that can actually be perfect.
Of course, this perfection is in regard to an axiomatic model of a real machine rather than the true machine itself. But isn't it better to have an implementation that's provably correct according to a model rather than what we have now, an implementation that's obviously flawed according to a model? Formal verification is really just the next level of type checking, and type checking is still incredibly useful despite also only relating to a model.

If you don't think a logical model can be accurate enough to model a real machine in sufficient detail, please check out these papers discussing [separation logic](http://www0.cs.ucl.ac.uk/staff/p.ohearn/papers/Marktoberdorf11LectureNotes.pdf), extremely high-fidelity formalizations of the [x86](http://nickbenton.name/hlsl.pdf) and [arm](https://www.cl.cam.ac.uk/~mom22/arm-hoare-logic.pdf) instruction sets, and [Iris](https://people.mpi-sws.org/~dreyer/papers/iris-ground-up/paper.pdf). Academics have been busy doing amazing stuff, even if they haven't been sharing it very well.

If you think we'll constantly be tripping over problems in incorrectly implemented operating systems or web browsers, well, you're missing the whole point of this project. These systems provide environments for other software, yes, but they're still just software themselves. Even if they aren't perfectly reliable *now*, the entire ambition of this project is to *make* them reliable.

We would, however, need hardware axioms to model the abstractions provided by a concrete computer architecture, and this layer is trickier to be completely confident in. Hardware faults and ambient problems of all kinds can absolutely cause unavoidable data corruption. Hardware is intentionally designed with layers of error correction and redundancy to avoid propagating corruption, but it still gets through sometimes.
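As a toy illustration of that redundancy (plain Rust, assuming nothing about any real hardware), a single parity bit is the simplest error-detection scheme: it catches any single flipped bit, but two flips cancel out and slip through undetected — corruption "getting through" in miniature.

```rust
// Compute even parity over a byte: the parity bit records whether the
// number of 1-bits is odd, so any single bit flip changes the check.
fn parity_bit(byte: u8) -> u8 {
    byte.count_ones() as u8 % 2
}

fn check(byte: u8, stored_parity: u8) -> bool {
    parity_bit(byte) == stored_parity
}

fn main() {
    let data: u8 = 0b1011_0010; // four 1-bits, so parity is 0
    let p = parity_bit(data);

    // intact data passes the check
    assert!(check(data, p));

    // a single flipped bit is detected
    let one_flip = data ^ 0b0000_0100;
    assert!(!check(one_flip, p));

    // but two flipped bits cancel out: corruption gets through
    let two_flips = data ^ 0b0001_0100;
    assert!(check(two_flips, p));

    println!("single flips detected, double flips undetected");
}
```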
There's one big reason to press on with formal verification nonetheless: the possibility of corruption or failure can be included in our axioms!

Firmware and operating systems already include state consistency assertions and [error correction codes](https://en.wikipedia.org/wiki/Error_detection_and_correction), and it would be nice if those checks themselves could be verified. The entire purpose of trackable effects is to allow environmental assumptions to be as high-fidelity and stringent as possible without requiring every piece of software to actually care about all that detail. This means the lowest levels of our verification pyramid can fully include the possibility of corruption and carefully prove it can only cause a certain amount of damage in a few well-understood places. Then the higher levels of the pyramid can build on top of that much sturdier foundation. Additionally, the concept of [corruption panics](./posts/design-of-magmide.md#corruption-panics) would allow software to include consistency checks even in states that should be logically impossible, to account for situations where the hardware has failed.

Yes, it's true that we can only go so far with formal verification, so we should always remain humble and remember that real machines in the real world fail for lots of reasons we can't control. But we can go much, much farther with formal verification than we can with testing alone! Proving correctness against a mere model with possible caveats is incalculably more robust than doing the same thing we've been doing for decades.

## Why can't you just teach people how to use existing proof languages like Coq?

The short answer is that languages like Coq weren't designed with the intent of making formal verification mainstream, so they're all pretty mismatched to the task.
If you want a deep answer to this question both for Coq and several other projects, check out [`posts/comparisons-with-other-projects.md`](./posts/comparisons-with-other-projects.md).\n\nThis question is a lot like asking the Rust project creators \"why not just write better tooling and teaching materials for C\"? Because instead of making something *awesome* we'd have to drag around a bunch of frustrating design decisions. Sometimes it's worth it to start fresh.\n\n<!-- ## How would trackable effects compare with algebraic effects?\n\nThere's a ton of overlap between the algebraic effects used in a language like [Koka](https://koka-lang.github.io/koka/doc/index.html) and the trackable effects planned for Magmide. Trackable effects are actually general enough to *implement* algebraic effects, so there are some subtle differences.\n\nOn the surface level the actual theoretical structure is different. Algebraic effects are \"created\" by certain operations and then \"wrap\" the results of functions. Trackable effects are defined by *starting* with some token representing a \"clean slate\", and then pieces of that token are given up to perform possibly effectful operations, and only given back if a proof that the operation is in fact \"safe\" is given.\n\nThis design means that trackable effects can be used for *any* kind of program aspect, from signaling conditions that can't be \"caught\" or \"intercepted\" (such as leaking memory), to notifying callers of the presence of some polymorphic control flow entrypoint that can be \"hijacked\".\n\nIt's important to also note that the polymorphic control flow use cases of algebraic effects could be achieved with many different patterns that no one would strictly call \"algebraic effects\". 
For example, a type system could simply treat all the implicitly "captured" global symbols as the default arguments of an implicit call signature of a function, allowing those captured global signals to be swapped out by callers (if a function uses a `print` function, you could detect that capture and supply a new `print` function without the function author needing to explicitly support that ability). Or you could simply use metaprogramming to ingest foreign code and replace existing structures. For this reason trackable effects would be more focused on effects related to correctness and safety rather than control flow, despite the relationships between the two. -->

## Isn't it undecidable to prove a program terminates or is correct?

If I were claiming Magmide could somehow ignore the problem of [undecidability](https://en.wikipedia.org/wiki/Decidability_(logic)) (or the [halting problem](https://en.wikipedia.org/wiki/Halting_problem), or [Rice's theorem](https://en.wikipedia.org/wiki/Rice%27s_theorem), or [Gödel's incompleteness theorems](https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems)) then this question would be a useful one. However, I'm *not* claiming that, which means you just haven't understood Magmide and its goals.

It's impossible to write an algorithm that can *automatically* and *without any guidance* determine whether *any arbitrary program* terminates/meets some non-trivial semantic condition. However, it is possible to write algorithms that can do so *some* of the time. And it's *always* possible to use dependent type theory to check whether a proof object successfully proves some proposition. *Checking* proofs is decidable; it's only *constructing* proofs that's in general undecidable. Researchers routinely prove that *particular* programs terminate or have certain characteristics, and they often have to manually write proofs to do so.

Nothing in any of these documents claims we can ignore proven truths of logic.
Magmide is just trying to integrate proven concepts (proof assistants and bare metal compilers) into a nice package.

I'm not an expert logician, and I'm happy to be corrected by more knowledgeable people. But if you're asking questions like this, you've simply misunderstood either Magmide or the referenced theorems.

## Isn't formal verification impractical in practice?

Historically, yes, verification systems have been very impractical, with three commonly cited issues:

- Extreme difficulty of composing proofs.
- Overly long and burdensome correctness annotations.
- Combinatorial explosion of proof terms or constraints, leading to unacceptable proof checking time.

I'm not terribly worried about composability, since separation logic systems such as Iris have demonstrated how much improvement the right abstractions can give. And I'm betting design features such as [asserted types](./posts/design-of-magmide.md#builtin-asserted-types), [inferred annotations](https://arxiv.org/abs/2207.04034), and [inferred proof holes](./posts/design-of-magmide.md#inferred-proof-holes) would make composing verified functions much more ergonomic. Ergonomics and abstractions can be improved over time, especially for specific classes of problems. We shouldn't throw out the entire idea of verification just because previous systems have had poor ergonomics.

I'm extremely excited about the already mentioned ["Flux: Liquid Types for Rust"](https://arxiv.org/abs/2207.04034) project, which demonstrated it's possible to ergonomically infer proof annotations. Essentially (mostly) all a programmer must do is add correctness conditions to *types* (just like [asserted types](./posts/design-of-magmide.md#builtin-asserted-types)) and (basically) all the other program annotations can be inferred.
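To give a feel for what a correctness condition attached to a type buys you, here's a plain Rust approximation (invented names, and an ordinary runtime check standing in for what a proof language would discharge statically): a wrapper type whose constructor enforces the condition, so downstream code can rely on it without re-checking.

```rust
// A type carrying a correctness condition: values can only be constructed
// if the condition holds, so every function receiving one may assume it.
// In a proof language this check would be discharged at compile time.
#[derive(Debug, Clone, Copy)]
struct NonZeroDivisor(i32);

impl NonZeroDivisor {
    fn new(n: i32) -> Option<NonZeroDivisor> {
        if n != 0 { Some(NonZeroDivisor(n)) } else { None }
    }
}

// No divide-by-zero check needed here: the condition lives in the type.
fn safe_div(a: i32, b: NonZeroDivisor) -> i32 {
    a / b.0
}

fn main() {
    let b = NonZeroDivisor::new(4).expect("4 is nonzero");
    assert_eq!(safe_div(20, b), 5);

    // the invalid value is rejected once, at the boundary
    assert!(NonZeroDivisor::new(0).is_none());
    println!("condition enforced at construction");
}
```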
Flux then sends all those conditions to a solver and doesn't allow manual proofs for more complex conditions, but Magmide would allow manual proofs, meaning the correctness conditions could be arbitrarily interesting.\n\nAs for questions like combinatorial explosion of verification conditions, it's absolutely true that all the computational work necessary to verify software can indeed be very expensive, especially if the proof system in question is fully automated and just generates a massive list of constraints to solve.\n\nA few techniques can help us improve the situation:\n\n- [Incremental compilation of proof terms](https://github.com/salsa-rs/salsa).\n- [Computational reflection](https://gmalecha.github.io/reflections/2017/speeding-up-proofs-with-computational-reflection). For many specific problem domains it's possible to write very targeted decidable algorithms to find proofs or at least discharge many trivial proof obligations (the Rust borrow checker is an example!). Since such algorithms are narrowly targeted at a specific domain, they can perform much better than a general purpose tactic or constraint solver.\n- Allowing manual/interactive proofs rather than requiring full automation. This may seem like a cop-out, and it certainly adds work for engineers, but if some theorem is simple to manually prove but would lead an automated system on a costly run through a massive search space, it's probably worth the time.\n\nJust like ergonomics, compiler performance can be improved over time. Type systems can potentially add a huge amount of usability pain and compilation cost, but if the right design tradeoffs are found then type systems are well worth the trouble. Proof systems are simply much more advanced type systems, and I'm willing to bet the combination of Iris and a few of the design ideas I've referenced can achieve a worthwhile set of tradeoffs.\n\n## Do you really think non-experts can meaningfully contribute here? 
Aren't you ignoring the difficult problems that researchers still haven't solved?\n\nThis question is a useful one to ask, but I ultimately think it's wrong-headed.\n\nI make this claim: **the most important bottleneck to the broader adoption and application of formal methods isn't unsolved research problems, but the \"day one\" problems of ergonomic usability and connected reusability.** Importantly, I only make this claim because Iris exists, which demonstrated the ability to verify extremely complex and realistic Rust code.\n\nMost of the software that's written every day isn't that complicated. Most of the correctness conditions people will actually care to prove will either relate to safety/security or to general robustness (not leaking memory, not throwing exceptions, not going into infinite loops/recursions), conditions that have been very rigorously explored by researchers. The research cutting edge is lightyears ahead of engineering practice, and we don't have to apply the full depth of theory to get huge payoffs in the general safety and stability of software.\n\nResearchers will continue to find solutions to difficult theoretical problems, which is great. But as long as their solutions only exist in difficult to reuse media such as Coq or pdf papers, those solutions will barely matter. Amazing theoretical progress hasn't truly fulfilled its purpose until it has *somehow* been applied to the real world.\n\nSo instead of saying \"we should wait for researchers to solve all these difficult problems\", I propose we build a highly usable system *now* with the theory we already have. If such a system existed, even researchers would benefit, since they would have a place to contribute further breakthroughs that would give them more visibility and support and return contributions. 
Magmide just wants to give both industrial engineers and academic researchers a solid foundation, one they can share and build up together.\n\n<!--\nFurthermore, this question reveals a fundamental lack of respect for industrial engineers. It's certainly true that market incentives and a loose startup culture have made many programmers undisciplined and flippant about quality and robustness. But not all practitioners have the same incentives and culture, and a large body of them (including myself!) care deeply about these questions. These engineers might have even realized that their life gets *easier* and their code velocity *faster* when they use more robust systems, and so will be glad to [\"get the hangover first\"](https://www.youtube.com/watch?v=ylOpCXI2EMM&t=565s&ab_channel=Rust), especially if they can do so incrementally.\n\nAcademic researchers are not a separate race of super geniuses who are the only ones capable of understanding formal methods. Academics are simply given access to the time and social network necessary to understand a literature that seems to intentionally shun outsiders.\n-->\n\n## Why build a system focused on engineers when even academics don't always use proof assistants? Shouldn't we try to build a system researchers will use first?\n\nNo. If you create a tool that allows practical verification of real software systems, primarily intended for approachable use by engineers, you'll necessarily have created a theorem prover that's enjoyable and ergonomic to use, and that supports easy sharing and reuse of proof labor across an entire community.\n\nThat design doesn't in any way preclude supporting the patterns that researchers like (using/supporting homotopy type theory, allowing concise notation using a flexible metaprogramming system, rendering proofs as latex/pdf/html/whatever documents). 
A highly metaprogrammable bare metal proof assistant would attract researchers, but a beautiful theorem prover without any special capability to reason about or compile bare metal code wouldn't attract engineers.\n\nThink about it: tons of researchers use python to analyze data or automate common tasks, or focus their research on the details of C or Rust or some specific instruction set architecture. Many fewer use Coq or do research about Coq. In general, at least in computing, researchers tend to follow industrial engineers.\n\nThe verification use cases engineers care about are more specific and fully implied by those that researchers care about. If we nail the use cases engineers care about, we'll get the use cases researchers care about basically for free.\n\n## Isn't most software too fuzzy or quickly evolving to make verification worth the effort?\n\nYes, many systems don't really have a clear definition of \"correct\", but that doesn't mean *aspects* of the system aren't worth verifying, or that it wouldn't be worth building that system *using* verified tools.\n\nWe don't have to be able to verify every facet of every program to make verification worth the effort, we just have to be able to prove enough useful things that we can't already prove with existing type systems.\n\nRefer to the concept of the [verification pyramid discussed above](https://github.com/magmide/magmide#fully-reusable).\n\n## Why bother writing code and then verifying it when we could instead simply generate code from specifications?\n\nGenerating code based on specifications is an extremely cool idea! [Some researchers have already made extremely interesting strides in that direction.](https://plv.csail.mit.edu/fiat/)\n\nIt seems impossible to always generate code for *any* specification, since some specifications aren't true or are undecidable. 
I'm not sure it would always be possible even for relatively mundane code (reach out to me if you know more about the related theory!).

Regardless of the theoretical limits of the approach, deductive synthesis systems have to be built *from* something, and compile *to* something. That something ought to be a proof language capable of bare metal performance, so Magmide would be a perfect fit for creating deductive synthesis systems.

## How far are you? What remains to be done?

Very early, and basically everything remains to be done! I've been playing with models of very simple assembly languages to get my arms around formalization of truly imperative execution. Especially interesting has been what it looks like to prove some specific assembly language program will always terminate, and to ergonomically discover paths in the control flow graph which require extra proof justification. I have some raw notes and thoughts about this in [`posts/toward-termination-vcgen.md`](./posts/toward-termination-vcgen.md). Basically I've been playing with the design for the foundational computational theory.

In [`posts/design-of-magmide.md`](./posts/design-of-magmide.md) I outline my guess at the project's major milestones. Obviously a project as gigantic as this can only be achieved by inspiring a lot of hardworking people to come and make contributions, so each milestone will have to show exciting enough capability to make the next milestone happen.

Read [this blog post discussing my journey to this project](https://blainehansen.me/post/my-path-to-magmide/) if you're interested in a more personal view.

<!--
## Should I financially support this project?

I (Blaine Hansen, the maintainer and author of this document) have recently enabled Github Sponsors for Magmide, but **you likely shouldn't sponsor the project yet**.
You are likely to be disappointed at how little influence a small sponsorship has on the speed of progress.\n\nThe ambition of this project means it's pretty \"all or nothing\". If the project never reaches the point where we've bootstrapped an initial version of the compiler, it would be difficult to say the project has provided any value at all. The chasm between here and there is pretty wide, with the most harrowing step being the definition of the language theory (operational semantics and custom weakest-precondition proposition to instantiate Iris).\n\nI'm confident I could get the project to that point if I had some help/pointers from Iris experts *and the freedom to work on this project full-time*. I'm not at all confident I can do so in my nights and weekends, even with occasional code contributions from others. I've actually been looking around at various ways to support the project at the level of a full-time pursuit, but haven't been able to find anything that makes sense. The most natural path to complete a project like this would be to pursue it in a PhD program, and as exciting as that would be it isn't possible because of a flurry of personal constraints.\n\nIf you know of a company that would be willing to make a series of short bets on an unproven researcher, then please let me know. Otherwise the volume of support that will come through Github Sponsors is unlikely to materially affect how much time I or anyone else will have to work on this project. I'm not going to make anyone a promise I'm not sure I can keep.\n-->\n\n## This is an exciting idea! How can I help?\n\nJust reach out! Since things are so early there are many questions to be answered, and I welcome any useful help. 
Feedback and encouragement are welcome, and you're free to reach out to me directly if you think you can contribute in some substantial way.\n\nIf you would like to get up to speed with formal verification and Coq enough to contribute at this stage, you ought to read [Software Foundations](https://softwarefoundations.cis.upenn.edu/), [Certified Programming with Dependent Types](http://adam.chlipala.net/cpdt/html/Cpdt.Intro.html), [this introduction to separation logic](http://www0.cs.ucl.ac.uk/staff/p.ohearn/papers/Marktoberdorf11LectureNotes.pdf), and sections 1, 2, and 3 of the [Iris from the ground up](https://people.mpi-sws.org/~dreyer/papers/iris-ground-up/paper.pdf) paper. You might also find my unfinished [introduction to verification and logic in Magmide](./posts/intro-verification-logic-in-magmide.md) useful, even if it's still very rough.\n\nHere's a broad map of all the mad scribblings in this repo:\n\n- `theory` contains exploratory Coq code, much of which is unfinished. This is where I've been playing with designs for the foundational computational theory.\n- `src`, `plugins`, and `test_theory` contain Rust, OCaml, and Coq code representing the current skeleton of the [initial bootstrapping toolchain](./posts/design-of-magmide.md#project-plan).\n- `posts` has a lot of speculative writing, mostly to help me nail down the goals and design of the project.\n- `notes` has papers on relevant topics and notes I've made purely for my own learning.\n- `notes.md` is a scratchpad for raw ideas, usually ripped right from my brain with very little editing.\n- `README.future.md` is speculative writing about a \"by example\" introduction to the language. I've been toying with different syntax ideas there, and have unsurprisingly found those decisions to be the most difficult and annoying :cry:\n\nThank you! 
Hope to see you around!\n\n---\n\n# What could we build with Magmide?\n\nA proof checker with builtin support for metaprogramming and verification of assembly languages would allow us to build any logically representable software system imaginable. Here are some rough ideas I think are uniquely empowered by the blend of capabilities that would be afforded by Magmide. Not all of these ideas are *only* possible with full verification, but I feel they would become much more tractable.\n\n## Truly eternal software\n\nThis is a general quality, one that could apply to any piece of software. With machine checked proofs, it's possible to write software *that never has to be rewritten or maintained*. Of course in practice we often want to add features or improve the interface or performance of a piece of software, and those kinds of expected improvements can't be anticipated enough to prove them ahead of time.\n\nBut if the intended function of a piece of software is completely understood and won't significantly evolve, it's possible to get it right *once and for all*. This is an especially good idea anywhere it's hard to get to the software, such as in many embedded systems like firmware, IoT applications, software in spacecraft, etc.\n\n## Safe foreign code execution without sandboxing\n\nIf it's possible to prove a piece of code is well-behaved in arbitrary ways, then it's possible to simply run foreign and untrusted code without any kind of sandboxing or resource limitations, as long as that foreign code provides a consistent proof object demonstrating it won't cause trouble.\n\nWhat kind of performance improvements and increased flexibility could we gain if layers like operating systems, hypervisors, or even internet browsers only had to type check foreign code to know it was safe to execute with arbitrary system access? 
Of course we still might deem this too large a risk, but it's interesting to imagine.\n\n## Verified critical systems\n\nMany software applications are critical for the safety of people and property. It would be nice if applications in aeronautics, medicine, industrial automation, cars, banking and finance, decentralized ledgers, and all the others were fully verified.\n\n## Secure voting protocols\n\nIt isn't enough for voting machines to be provably secure; the voting system itself must be cryptographically transparent and auditable. The [ideal requirements](https://en.wikipedia.org/wiki/End-to-end_auditable_voting_systems) are extremely complex, and would be very difficult to get right without machine checked proofs.\n\nVoting is sufficiently high stakes that it's extremely important for a voting infrastructure to not simply be correct, but be *undeniably* correct. I imagine it will be much easier to assert the fairness and legitimacy of voting results if all the underlying code is much more than merely audited and tested.\n\n## Universally applicable type systems\n\nThings like the [Underlay](https://research.protocol.ai/talks/the-underlay-a-distributed-public-knowledge-graph/) or the [Intercranial Abstraction System](https://research.protocol.ai/talks/the-inter-cranial-abstraction-system-icas/) get much more exciting in a world with a standardized proof checker syntax to describe binary type formats. 
If a piece of data can be annotated with its precise logical format, including things like endianness and layout semantics, then many more pieces of software can automatically interoperate.\n\nI'm particularly excited by the possibility of improving the universality of self-describing apis, ones that allow consumers to merely point at some endpoint and metaprogrammatically understand the protocol and type interface.\n\n## Truly universal interoperability\n\nAll computer programs in our world operate on bits, and those bits are commonly interpreted as the same few types of values (numbers, strings, booleans, lists, structures of those things, standardized media types). In a world where all common computation environments are formalized and programs can be verified to correctly model common logical types in any of those common computation environments, correct interoperation between those environments can also be verified!\n\nIt would be very exciting to know with deep rigorous certainty that a program can be compiled for a broad host of architectures and model the same logical behavior on all of them.\n\n## Semver enforcing and truly secure package management\n\nSince so much more knowledge of a package's api can be had with proof checking and trackable effects, we can have distributed package management systems that enforce semver protocols at a much greater granularity and ensure unwanted program effects don't accidentally (or maliciously!) sneak into our dependency graphs.\n\n## Invariant protection without data hiding\n\nMany languages support some idea of encapsulation or data hiding, to allow component authors to ensure outside components don't reach into data structures and break invariants. With proof checking available, it's possible to simply encode invariants directly alongside data, effectively making arbitrary invariants a part of the type system. 
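To make that concrete: in today's Rust, the only way to protect an invariant like "a percentage is always between 0 and 100" is to hide the field and funnel all writes through checked constructors (this `Percentage` type is just a made-up illustration):

```rust
// Today's approach: the field is private *only* to protect the invariant,
// so every access has to go through methods and runtime checks.
pub struct Percentage {
	value: u8, // invariant: value <= 100
}

impl Percentage {
	pub fn new(value: u8) -> Option<Percentage> {
		// a runtime check standing in for what could be a static proof
		if value <= 100 { Some(Percentage { value }) } else { None }
	}

	pub fn get(&self) -> u8 {
		self.value
	}
}

fn main() {
	assert!(Percentage::new(101).is_none());
	assert_eq!(Percentage::new(42).unwrap().get(), 42);
}
```

With arbitrary invariants in the type system the field could remain public and the `Option` dance could disappear, since any outside write would have to come with a proof that `value <= 100` still holds.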
When this is true, data no longer has to be hidden at the type system level. We can still choose to make some data hidden from documentation, but doing so would simply be for clarity rather than necessity.\n\nRemoving the need for data hiding allows us to reconsider almost all common software architectures, since most are simply trying to enforce consistency with extra separation. Correct composition can be easy and flexible, so we can architect systems for greatest performance or clarity and remove unnecessary walls. For example, strict microservice architectures might lose much of their usefulness.\n\n## Flattened async executor micro-kernel operating system\n\nThe process model is a very good abstraction, but the main reason it's useful is that it creates hard boundaries around different programs to prevent them from corrupting each other's state. Related to the above point, what if we don't have to do that anymore? What if code from different sources could simply inhabit the same memory space without much intervention?\n\nThe Rust community has made some very innovative strides with their asynchronous executor implementations, and I am one person who believes the \"async task\" paradigm is an extremely natural way to think about system concurrency and separation. What if an async task executor could simply be the entire operating system, doing nothing but managing task scheduling and type checking new code to ensure it will be well-behaved? In this paradigm, the abstractions offered by the operating system can be moved into a *library* instead of being offered at runtime, and can use arbitrary capability types to enforce permissions or other requirements. Might such a system be both much more performant and simpler to reason about?\n\n## Metaprogrammable multi-persistence database\n\nMost databases are designed to run as an isolated service to ensure the persistence layer is always in a consistent state that can't accidentally be violated by user code. 
With proof invariants this isn't necessary, and databases can be implemented as mere libraries.\n\nImmutable update logs have proven their value, and with proof checking it would be much easier to correctly build \"mutable seeming\" materialized views based on update commands. Databases could more easily save multiple materialized views at different scales in different formats.\n\n## More advanced memory ownership models\n\nRust has inspired many engineers with the beautiful and powerful ideas of ownership and reference lifetimes, rooting out many tricky problems before they arise.\n\nHowever, the model is too simple for many obviously correct scenarios, such as mutation of a value from multiple places within the same thread, or pointers in complex data structures that still only point to ownership ancestors or strict siblings, as in doubly-linked lists. More advanced invariants and arbitrary proofs can solve this problem.\n\n## Reactivity systems that are provably free from leaks, deadlocks, and cycles\n\nReactive programming models have become ubiquitous in most user interface ecosystems, but in order to make sense they often rely on the tacit assumption that user code doesn't introduce resource leaks or deadlocks or infinite cycles between reactive tasks. Verification can step in here and produce algorithms that enforce tree-like structures for arbitrary code.\n"
  },
  {
    "path": "iris-notes.md",
    "content": ">\n  “number of steps of computation that the program may perform”. This intuition is not entirely\n  correct, but it is close enough.\n\n  VJAKδ is now a predicate over both a natural number k ∈ N and a closed value v.\n  Intuitively, (k,v) ∈ VJAKδ means that no well-typed program using v at type A will “go\n  wrong” in k steps (or less).\n\nwhat does it mean for something to hold for k steps?\n\n\n>\n  iProp is obtained from a more general construction: uniform predicates over\n  a unital resource algebra M, written UPred(M).\n\n  The type UPred(M) consists of predicates over step-indices and resources (from M) which\n  are down-closed with respect to the step-index and up-closed with respect to the resource:\n\n  UPred(M) := {P ∈ Prop(N, M) | ∀(n, a) ∈ P. ∀m, b. m ≤ n ⇒ a ;included b ⇒ (m, b) ∈ P}\n\nso if some (n, a) is \"proven\", then so is any (m, b) where both m is `<=` (earlier than or same?) n and b is `>=` (includes or same) a\nso you can take a valid (n, a) and make it either closer in number of steps or involving a larger piece of resource algebra state?\n\n\n- how does step-indexing *actually* relate to program steps?\n- are the step indexes only ever `infinity` or `1`?\n\n```\nIn the base case, when the argument is a value v, we have to prove the postcondition Q(v)\n(after potentially) updating the ghost state. Otherwise, if e is a proper expression, we get to\nassume the state interpretation SI(h) (explained below) and have to show two conditions:\n(1) the current expression e can make progress in the heap h where progress(e, h) :=\n∃e\n0\n, h0\n. (e, h) ❀ (e\n0\n, h0\n) and (2) for any successor expression e\n0 and heap h\n0\n, we have to\nshow the weakest precondition and the state interpretation after an update to the ghost\nstate and after a later.\nThe updates in both cases makes sure that we can always update our ghost state when\nwe prove a weakest precondition. 
These updates are instrumental for working with the\nstate interpretation below and for verifying code which relies on auxiliary ghost state.\nThe later in the second case ensures that the weakest precondition can be defined as a\nguarded fixpoint. Moreover, it ties program steps to laters in our program logic (i.e., in the\nrules LaterPureStep, LaterNew, LaterLoad, and LaterStore). In fact, this later in the\ndefinition of the weakest precondition is responsible for the intuition: “. P means P holds\nafter the next step of computation”. More concretely, if one proves a weakest precondition\nwp e {v. Q(v)} under the assumption . P then, after the next step of computation, the goal\nbecomes .wp e\n0 {v. Q(v)}. We can then use the rule LaterMono to remove the later in\nfront of wp e\n0 {v. Q(v)} and in front of . P.\n```\n\n\n\nthe prefix `TC` is \"typeclass\" and comes from stdpp. it seems they've redefined a bunch of the basic operators in coq (eq, and, or, forall, etc) as typeclasses?\n\n\n\n\n`bi` == bunched implications, which is just the logical ideas of separation logic (* operator as resource composition, -* like a \"resource function\" that can take resources and transform them, etc)\n\n`si` == step-indexed, still don't entirely get the intuition behind step indexed relations, but whatever\n\n`coPset` == set of positive binary numbers. `co` is for the idea of \"cofiniteness\"? a subset is `co`finite if it's `co`mplement is finite.\nit looks like `coPset`s are used as the \"masks\"? 
the sets that hold ghost variable/invariant names?\n\n`E` is generally used for masks\n\n`Canonical` is just a command for making some typeclass instance available to coq's type inference, so it can be found automatically\n\n`Structure` is the same as `Record`!!!!\n\n`lb` == lower bound\n\n`%I` means to resolve in `bi_scope`\n\nLeibniz equality is the kind where two things are equal if all propositions that are true for one are true for the other\n\n\n`|==>` is `bupd`, or basic update\n\n`P ==∗ Q` is `(P ⊢ |==> Q)`, so P entails you can get an updatable Q, using separation logic entailment\nconfusingly it can also mean `(P -∗ |==> Q)` in bi_scope?\n\n```\nClass BUpd (PROP : Type) : Type := bupd : PROP → PROP.\nNotation \"|==> Q\" := (bupd Q) : bi_scope.\nNotation \"P ==∗ Q\" := (P ⊢ |==> Q) (only parsing) : stdpp_scope.\nNotation \"P ==∗ Q\" := (P -∗ |==> Q)%I : bi_scope.\n\nClass FUpd (PROP : Type) : Type := fupd : coPset → coPset → PROP → PROP.\nNotation \"|={ E1 , E2 }=> Q\" := (fupd E1 E2 Q) : bi_scope.\nNotation \"P ={ E1 , E2 }=∗ Q\" := (P -∗ |={E1,E2}=> Q)%I : bi_scope.\nNotation \"P ={ E1 , E2 }=∗ Q\" := (P -∗ |={E1,E2}=> Q) : stdpp_scope.\n\nNotation \"|={ E }=> Q\" := (fupd E E Q) : bi_scope.\nNotation \"P ={ E }=∗ Q\" := (P -∗ |={E}=> Q)%I : bi_scope.\nNotation \"P ={ E }=∗ Q\" := (P -∗ |={E}=> Q) : stdpp_scope.\n```\n\nIn general the `▷=>^ n` syntax indicates a number of steps `n` accompanying the mask update?\n\n\n`wsat` is world satisfaction\n\n\n\nin the context of ofes `dist` means distance\n> The type `A -n> B` packages a function with a non-expansiveness proof\n\n> When an OFE structure on a function type is required but the domain is discrete,\none can use the type `A -d> B`.  
This has the advantage of not bundling any\nproofs, i.e., this is notation for a plain Coq function type.\n\n> When writing `(P)%I`, notations in `P` are resolved in `bi_scope`\n\nso it looks like the suffix `I` means internal\n\n\n`■ (P)` means \"plainly P\", meaning P holds when no resources are available\n\n`Λ` is generally an instance of a `language`\n\nit seems `tp` is generally a thread pool?\n\nit seems `upd` is update\nand `bupd` is basic update\nand `fupd` is fancy update\n\n\nIt seems the suffix `G` is used to mean \"in global\"\n\n\nthe only purpose of \"later\" is to prevent the kinds of infinite loops that can make a logic invalid (able to prove False). it's used to define propositions like weakest preconditions that must somehow bake the idea of \"the program takes a step\" into their meaning\n\nordered families of equivalences (ofe's) are just a \"convenient\" (if you can call them that) way of encoding \"steps\" into the system. ofe's make the equivalence of some pieces of data dependent on a step index, so pieces of data might be equivalent at some indexes but not others.\nbut most of the time the step indexes don't matter! most actual *data types* aren't recursive or hold some concept of computational steps in them, so the \"equivalences\" hold for *all* step indexes!\n\na \"cmra\" or \"camera\" is the fully general version of a resource algebra that actually uses the idea of step-indexed equality.\n\n\n\njust copying a chunk of `docs/resource_algebras.md`:\n\n>\n  The type of Iris propositions `iProp Σ` is parameterized by a *global* list `Σ:\n  gFunctors` of resource algebras that the proof may use.  
(Actually this list\n  contains functors instead of resource algebras, but you only need to worry about\n  that when dealing with higher-order ghost state -- see \"Camera functors\" below.)\n\n  In our proofs, we always keep the `Σ` universally quantified to enable composition of proofs.\n  Each proof just assumes that some particular resource algebras are contained in that global list.\n  This is expressed via the `inG Σ R` typeclass, which roughly says that `R ∈ Σ`\n  (\"`R` is in the `G`lobal list of RAs `Σ` -- hence the `G`).\n\n\n\niris\n  program_logic: it seems to contain files related to the instantiation of iris and weakest preconditions for the general \"language\" concept with exprs and vals etc. I don't think I care except to look for patterns and examples\n\n  base_logic: is all the pay dirt in here?\n\n  bi: contains files related to bunched implications logic?\n  si_logic: contains files related to step-indexed logic?\n\n  algebra: contains files related to resource algebras?\n\n\nSo I'll have to define some `magmideG` typeclass and `magmideΣ` list of resource algebras and a `subG_magmideΣ` instance\n\n`inG` asserts some resource algebra is in a list\n`subG` asserts a list of resource algebras is contained in a list\n\n> The trailing `S` here is for \"singleton\"\n\nhmm\n\n```coq\nClass magmideG Σ := {\n  magmide_inG: inG Σ magmideR;\n  magmide_some_other_library: some_other_libraryG Σ\n}.\nLocal Existing Instances magmide_inG.\nLocal Existing Instances magmide_some_other_library.\n... other fields\n\nDefinition magmideΣ: gFunctors := #[GFunctor magmideR; some_other_libraryΣ].\n\nInstance subG_magmideΣ {Σ}: subG magmideΣ Σ → magmideG Σ.\nProof. solve_inG. Qed.\n\nSection proof.\n  Context `{!magmideG Σ, !otherthingsG Σ}.\nEnd proof.\n```\n\n> The backtick (`` ` ``) is used to make anonymous assumptions and to automatically\ngeneralize the `Σ`.  
When adding assumptions with backtick, you should most of\nthe time also add a `!` in front of every assumption.  If you do not then Coq\nwill also automatically generalize all indices of type-classes that you are\nassuming.  This can easily lead to making more assumptions than you are aware\nof, and often it leads to duplicate assumptions which breaks type class\nresolutions.\n\n"
  },
  {
    "path": "justfile",
    "content": "# build:\n# \tdune build\n\n# wget --no-check-certificate -O - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add -\n# add-apt-repository 'deb http://apt.llvm.org/bionic/   llvm-toolchain-bionic-13  main'\n# sudo apt install llvm-13 libclang-common-13-dev\nlab:\n\t#!/usr/bin/env bash\n\t# lli-13 lab.ll\n\tcargo run\n\tlli-13 lab.bc\n\techo $?\n\ntest:\n\tcargo test\n\tdune runtest\n\ndev:\n\tcargo test test_lex -- --nocapture\n\n\nclean:\n\t#!/usr/bin/env bash\n\tpushd theory\n\tmake clean\n\trm -f *.glob\n\trm -f *.vo\n\trm -f *.vok\n\trm -f *.vos\n\trm -f .*.aux\n\trm -f .*.d\n\trm -f Makefile*\n\trm -f .lia.cache\n\trm -f *.ml*\n\tpopd\n\nbuild:\n\t#!/usr/bin/env bash\n\tpushd theory\n\tmake\n\tpopd\n\nfullbuild:\n\t#!/usr/bin/env bash\n\tpushd theory\n\tcoq_makefile -f _CoqProject *.v -o Makefile\n\tmake clean\n\tmake\n\tpopd\n"
  },
  {
    "path": "lab.ll",
    "content": "; https://stackoverflow.com/questions/41716079/llvm-how-do-i-write-ir-to-file-and-run-it/41833643\n; https://stackoverflow.com/questions/7773194/is-it-possible-to-use-llvm-assembly-directly\n\n; https://ecksit.wordpress.com/2011/01/01/hello-world-in-llvm/\n; https://kripken.github.io/llvm.js/demo.html\n\n; @str = internal constant [19 x i8] c\"Hello LLVM-C world!\"\n\n; declare i32 @puts(i8*)\n\ndefine i32 @main() {\ndoit:\n\t; https://blog.yossarian.net/2020/09/19/LLVMs-getelementptr-by-example\n\t; %0 = call i32 @puts(i8* getelementptr inbounds ([19 x i8], [19 x i8]* @str, i32 0, i32 0))\n\t%0 = add i32 3, 4\n\t%1 = add i32 %0, %0\n\tret i32 %1\n}\n"
  },
  {
    "path": "mg_examples/main.mg",
    "content": "type Day;\n\t| Monday\n\t| Tuesday\n\t| Wednesday\n\t| Thursday\n\t| Friday\n\t| Saturday\n\t| Sunday\n\nproc next_weekday(d: Day): Day;\n\tmatch d;\n\t\tDay.Monday => Day.Tuesday\n\t\tDay.Tuesday => Day.Wednesday\n\t\tDay.Wednesday => Day.Thursday\n\t\tDay.Thursday => Day.Friday\n\t\t_ => Day.Monday\n\nproc same_day(d: Day): Day;\n\td\n\nprop Eq(@T: type): [T, T];\n\t(t: T): [t, t]\n\nthm example_next_weekday: Eq[next_weekday(Day.Saturday), Day.Monday];\n\tEq(Day.Monday)\n"
  },
  {
    "path": "notes/2019-popl-iron-final.md",
    "content": "pretty simple so far, just saying none of the concurrent separation logics enable tracking *obligations*, merely correctness in the sense of not *doing* something incorrect, rather than *incorrectly forgetting to do something necessary*.\n\nthis is a problem whenever we're using persistent/duplicable/shareable invariants, which can be copied arbitrarily to be given to different threads. doing this is necessary in fork-style concurrency (vs \"structured\" concurrency in which the language syntax itself determines where invariants exist).\nsince they're duplicable, they can be thrown away\n\nthe main way they're going to solve this problem is with what they're calling \"trackable resources\"\nthe first one is the \"trackable points-to connective\" `l ->_pi v`, where pi is a rational number describing what fraction of the heap we have control or knowledge of. `pi = 1` means we own the whole thing, and `pi < 1` means someone else has some control\n\nthen they define Iron++, which defines \"trackable invariants\" (rather than resources), and Iron++ is linear rather than affine (it doesn't have the weakening rule, so you can't throw away resources). this means these invariants aren't duplicable, but instead have to be \"split\"\n\ngetting into it, they define some rules, in which the `e_pi` proposition is like an empty heap, equivalent to the permission to allocate.\n\nemp-split:\n`e_pi1 ∗ e_pi2 <-> e_(pi1 + pi2)`\n\npt-split:\n`(l -->_pi1 v) * (e_pi2) <-> (l -->_(pi1 + pi2) v)`\n\nsince `e_pi` propositions allow us to demonstrate we've deallocated memory, we can prove a program doesn't leak memory by giving it a hoare triple of `{ e_pi } program { e_pi }` where pi is equal in pre and post, for any pi\n\n\nI got all I needed from this paper I think\n"
  },
  {
    "path": "notes/assembly-proofs.md",
    "content": "this paper is mostly just a reimplementation of vale in f*, but with a more efficient proof reflection style verification condition generator\nthe generator is more efficient just because it's a polynomial time algorithm that checks all the easily decidable stuff and defers everything else to a solver. whatevs\n"
  },
  {
    "path": "notes/category-theory-for-programmers.md",
    "content": ""
  },
  {
    "path": "notes/coq-coq-correct.md",
    "content": "something I have to look at is how metaprogramming works in a bunch of these other languages, metacoq and f* metaprogramming\n\n> This paper proposes to switch from a trusted code base to a trusted theory base paradigm!\n\nokay I can't read this yet, I have to read metacoq\n"
  },
  {
    "path": "notes/coq-metacoq.md",
    "content": "okay reading this has actually been helpful\nI'm still a little hazy on all the typing rules of cic, I guess mostly that they seem to be not as complex as I would assume them to be. honestly the coq reference is almost certainly a better place to understand all of that.\n\nhowever now that I actually understand metacoq and how to use it, I intend to use it to play around with a simpler way of declaring everything, such as a `type A = ` oriented way of doing things\n"
  },
  {
    "path": "notes/indexing-foundational-proof-carrying-code.md",
    "content": "so far this paper is really simple, it's just saying what proof-carrying-code (PCC) is and why it's valuable. he's also saying it would be great for these systems to not assume a particular type-system, but instead just be rooted in mathematics/logic.\n\nVC generator: verification condition generator (akin to a tactic that examines code and infers hoare triples?)\n\nso the first 4 sections of this paper are just talking about how we can specify the operational semantics of a physical machine and instruction set, then define program state safety and program safety in terms of the step relation given by the operational semantics. pretty simple! especially interesting is the idea of a safe *program*, which depends on the program being written in a *position independent* manner (which I suppose would mean all instructions merely reference offsets from the program counter).\n\nsee now in section 5 he's talking about *typed* intermediate representations, which is dumb! metaprogrammable recombination forever!\n\nhe's also talking about the difference between syntactic and semantic type representation. I guess the core difference is that syntactic type representation is *opaque*, the syntax rules are basically assigned axiomatically. whereas semantic ones are rooted in actual logic, so all the transformation rules can be derived from the underlying meaning of the types.\n\nbut now we're getting to \"recursive contravariance?\" and how it makes step-indexing necessary? I'm almost there.\n\nInstead of saying a type is a set of values, we say it is a set of pairs `<k, v>`, where k is an approximation index and v is a value. The judgement `<k, v>` ∈ τ means, \"v approximately has type τ, and any program that runs for fewer than k instructions can't tell the  difference.\"\" The indices k allow the construction of a well founded recursion, even when modeling contravariant recursive types.\n\nSo I guess the k-indexing is just a wrapper of some kind? 
I think contravariant recursion is just another way of saying it has to be strictly positive in the Coq sense. an inductive constructor can't accept as an argument a function that itself takes the inductive type being defined as an argument, because this allows for infinite recursion and therefore unsoundness.\n"
  },
  {
    "path": "notes/indexing-indexed-model.md",
    "content": "this one is actually getting somewhere. it's basically the same paper as `indexing-foundational-proof-carrying-code` but actually gives some intuitions for what they're talking about with recursive types\n\nthe important thing it seems is this mu operator\n\n```\nµF ≡ {<k, v> | <k, v> ∈ F^(k+1)(⊥)}\n\nµ(F) = λkλv.∀τ. ncomp(F, k + 1, ⊥, τ) ⇒ τ k v\n\nwhere ncomp(F, k, g, h) means informally that F^k (g) = h\n\nncomp(f, n, x, y) can be defined as,\n  ∀ g.\n    (∀z. g(f, 0, z, z))\n    ⇒ (∀m, z1, z2 .m > 0 ⇒ g (f, m − 1, z1, z2 ) ⇒ g (f, m, z1, f(z2)))\n    ⇒ g (f, n, x, y).\n```\n"
  },
  {
    "path": "notes/indexing-modal-model.md",
    "content": "Before getting into the real paper, I'm going to quickly try to gain some clue about what modal logic and kripke semantics are, why they're useful, and how they might relate to step-indexing.\n\n# https://plato.stanford.edu/entries/logic-modal/\nIn general, modal logic is a logic where the truth of a statement is \"qualified\", using some \"mode\" like \"necessarily\" or \"possibly\"\n\nThere's a weak logic called `K` (after Saul Kripke) that includes ~, -> as usual, but also the `nec` operator for \"necessarily\". (written with the annoying box symbol □)\n\n`K` is just normal propositional logic with these rules added relating to the `nec`\n\nNecessitation Rule: If A is a theorem of K, then so is `nec(A)`.\n\nDistribution Axiom: `nec(A -> B) -> (nec(A) -> nec(B))`.\n\nThen there's the `may` operator (for \"possibly\" or \"maybe\", written with the annoying diamond symbol ◊).\nIt can be defined from `nec` by letting `may(A) = ~nec(~A)`, or \"not necessarily not A\". This means `nec` and `may` mirror each other in the same way `forall` and `exists` do.\n\nUh oh, there's a whole family of modal logics based on which axioms of \"simplification\" they include? They're saying which ones make sense depends on what area you're working in. I'm sure this will lead to fun situations in step-indexing.\n\nThe important part! **Possible Worlds**\n\nEvery proposition is given a truth value *in every possible world*, and different worlds might have different \"truthiness\".\n\n`v(p, w)` means that for some valuation `v`, propositional variable `p` is true in world `w`.\n\n~ := `v(∼A, w) = True <-> v(A, w) = False`\n-> := `v(A -> B, w) = True <-> v(A, w) = False or v(B, w) = True`\ntheorem 5 := `v(□A, w) = True <-> forall w': W, v(A, w') = True`\n^^^^^^^^^\ntheorem 5 is important! 
it seems this is the thing that makes it all make sense.\nsince `nec` and `may` are equivalent to \"all\" and \"some\" when thinking about possible worlds, theorem 5 implies that `may` is similar to `exists`, `◊A = ∼□∼A`\n`may` is true when the proposition is true in *some* worlds, but not necessarily all of them, or that we merely know that A isn't necessarily false *everywhere*.\n\nAh yeah hold on, theorem 5 isn't always reasonable for every kind of modal logic. in temporal logic, where a \"world\" is really just an \"instant\" (hint, this is almost certainly what we're dealing with in step-indexing), `nec` really means that something will *continue* to be true into the future, but may not have been in the past.\n\nin these cases, we have to define some relation R to define \"earlier than\"\n\ntheorem K := `v(□A, w) = True <-> forall w', (R(w, w') -> v(A, w')) = True`\n\nso essentially A is necessarily true in w if and only if forall worlds *that are later than w* A is still true\n\nso then a kripke frame `<W, R>` is a pair of a set of worlds W and a relation R.\n\nI'm skipping over a bunch of stuff that doesn't seem relevant for getting to step-indexing.\n\nOkay bisimulation is a place where this is useful.\nlabeled transition systems (LTSs) represent computation pathways between different machine states.\nAn easily understood quote:\n\n```\nLTSs are generalizations of Kripke frames, consisting of a set W of states, and a collection of i-accessibility relations Ri, one for each computer process i. Intuitively, Ri(w, w') holds exactly when w' is a state that results from applying the process i to state w.\n```\n\nThe last important thing I'll say: the properties (such as transitivity, or being a total preorder) of the *accessibility relation* R (it defines accessibility!) 
define what axioms are reasonable to use in some context.\n\n# moving on to the paper!\nhttps://www.irif.fr/~vouillon//smot/Proofs.html\n\nokay they're just talking about what they're trying to achieve, especially how we need recursive and quantified types (quantified means that they may be generic or unknown, as is the case with something like `forall t: T`, where t is quantified) in order to represent tree structures in memory and other such things. types need to allow impredicativity, so types can refer to themselves\n\nthey talk a little about the difference between syntactic and semantic interpretations. The way I choose to understand this distinction is that syntactic rules can only refer to themselves and can't derive value from other systems, whereas semantic ones are merely embedded in some larger logical system that itself can be used to extend the rules.\n\nThis seems to point to an important distinction I've been missing:\n\n```\nWe start from the idea of approximation\nwhich pervades all the semantic research in the area. If we\ntype-check v : τ in order to guarantee safety of just the\nnext k computation steps, we need only a k-approximation\nof the typing-judgment v : τ .\n```\n\nThe important part is *next* k computation steps. It seems this implies that the type judgment may become false *after* k. This isn't how I was thinking about it, which was that the judgment *will become* true in k steps. The less-than relationship to k makes a lot more sense with this interpretation.\n\nThis also seems important:\n\n```\nWe express this idea here using a Kripke semantics whose possible-worlds accessibility\nrelation R is well-founded: every path from a world w into\nthe future must terminate. In this work, worlds characterize\nabstract computation states, giving an upper bound on the\nnumber of future computation steps and constraining the\ncontents of memory.\n```\n\nI'm a little scared about the implications of this \"every path must terminate\" thing. 
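To make \"well-founded\" concrete for myself: a toy model (mine, not the paper's) where a world is just a natural number bounding the remaining steps, and R only moves strictly downward, so every path into the future is forced to terminate:

```python
# toy model of a well-founded accessibility relation: a world is a natural
# number (an upper bound on remaining computation steps), and R(w, w2)
# holds when strictly fewer steps remain
def R(w, w2):
    return w2 < w

def path_lengths(w):
    # lengths of every possible R-path from w into the future:
    # step to any w2 with R(w, w2), i.e. any w2 in range(w), and recurse
    if w == 0:
        return [0]
    return [1 + n for w2 in range(w) for n in path_lengths(w2)]

# every path from world 3 terminates, after at most 3 steps
assert max(path_lengths(3)) == 3
assert R(3, 0) and not R(3, 3)
```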
I'm hoping that doesn't mean we can't prove things about possibly non-terminating programs (maybe we could define infinite divergence as a terminating \"world\"?). Nope! They specify in a later section of the paper that we can still use this idea to prove things about *any finite prefix* of any program.\n\nI'll write down some of their base rules to help me remember:\n\nw ||- v: T\nmeans v has type T in world w\n\nU |- T\nmeans every value u of type U in any world w also has type T in w. this seems equivalent to saying U is a subtype of T.\n\nThen the modal operator \"later\"!\n`lat` quantifies over all worlds (all times) *strictly in the future*\n\nthey point out that `nec` instead applies to *now* as well as the future. I guess this contradicts my intuition that the \"less than k\" steps thing is meaningful here\n\nMore seemingly important stuff\n\n```\nIndeed, the combination of a well-founded R with a strict modal operator `lat` provides a clean induction principle to the logic, called the Löb rule,\n\nlat(T) |- T\n-----------\n    |- T\n```\n\nSo if merely assuming a prop is true *later* is enough to prove it's true now, then the prop is just true, unconditionally.\nIt seems this just means the later is meaningless, *or that there's nothing in the prop that depends on the world*.\n\n> In this section we will interpret worlds as characterizing abstract properties of the current state of computation. In particular, in a system with mutable references, each world contains a memory typing\n\nIn different types of machines, a \"world\" is a different thing (in a lambda calculus with a store, a state is a pair of an expression and that store; in a von Neumann machine, a state is the pair of registers (including program pointer) and memory)\n\n```\nClearly, the same value v may or may not have type T depending on the world w, that is, depending on the data structures in memory that v points to. 
Accordingly, we call a pair (w, v) a configuration (abbreviated \"config\"):\n\nConfig = W x V,\n\nand define a type T ∈ Type as a set of configurations. Then,\n\n(w, v) ∈ T\nand\nw |- v : T\n\nare two alternative notations expressing the same fact.\n```\n\n> We will show how our semantics connects the relation R between worlds and the relation >-> between states.\n\nI guess they're saying there's some sort of correspondence between the R relation showing how \"worlds\" are accessible in time from one another and the small step relation `>->` that shows how computation states are accessible from one another. This makes sense since worlds and states are the same thing.\n\n\nSo a type is just a set of configurations, or a set of values pointing to something in some world. This is basically saying that a type is all values *that exist in a world* that makes the type assertions true. Yeah, they say \"a type is just any set of configurations\"\nbasic stuff like the top/bottom types, \"logical\" intersection/union, and function types are pretty easy to describe then.\n\nI'll put the first few in my own words:\n\ntop := {(w, v) | True}\nthe top type describes all configs! so every type is a subtype of it\n\nbot := {}\nthe bot type is the empty set, so it describes no configs, so it is uninhabited\n\nT /\\ U := T intersection U\ntype and set intersection are equivalent, since the intersection of types T and U is only the configs that are described by both conditions\n\nT \\/ U := T union U\nsimilar idea, we smush together the types, which means any config described by either of them is valid\nrelated, discriminated unions then are the union of types which have no intersection, or where the description of each type necessarily precludes the other\n\nU => T := {(w, v) | (w, v) ∈ U => (w, v) ∈ T }\nthis is slightly more involved, but only because I'm not sure if he's talking about implication or functions. 
I'm going to guess implication, since there's no talk of substitution or anything like that\nall configs such that if the config is in U, it is also in T\n\n\nNow he gets into how quantification is represented in the type system. These are more interesting.\nimportantly, in the below, A can be either Type, Loc, or Mem.\n\nforall x:A.T := global_intersection<a in A>(T[a/x])\nokay, first parsing it:\nforall x which is an A, then T is defined as\nthe global intersection over all items a in A, where for each one we've substituted that a for our variable x in T\nthat basically means that forall is the intersection set of all configs (or locs or mems) where...\nI'm not sure I get it yet. the exists below is similar just with union, so I'll wait until later to understand what's going on. hopefully he gives an applied example.\n\nexists x:A.T := global_union<a in A>(T[a/x])\n\nQuantification over values in a world.\npretty simple,\n\n!T := {(w, v) | forall v'. (w, v') in T}\nall values in the current world have type T\n\n?T := {(w, v) | exists v'. 
(w, v') in T}\nsome value in the current world has type T\n\n\nThen they brag about how they can define types in terms of their primitives without using the underlying logic.\n\nT <=> U = T => U /\\ U => T\nT iff U, pretty simple (this confirms my suspicion that => was meant to indicate implication, although implication is isomorphic with functions, so there's something there as well)\n\nteq(T, U) = !(T <=> U)\nbasically type equality, since for all values in (the current world) the types are equivalent to each other\nthe dependence on the current world is the only part I don't love....\n\nworld types (which teq(T, U) is one of) are types that only depend on the world, not the value (I'm guessing persistent types are ones that depend on neither)\n\n\n\nOkay Vector Values is where I got kind of stuck before, let's write things out as we go to keep it clear.\n\n```\nwe have locations `l: Loc`, that index a mutable store m;\nstorable values `u: SV` that are the range of m (contents of memory cells);\nand values `v: V`.\n\nWe assume Loc subset SV (meaning locations are storable values, but there are more storable values than just locations)\n\nOn a von Neumann machine, SV = Loc (so locations *do* in fact fully describe storable values)\nand v is a vector of locations (one could think of a register-bank) indexed by a natural number j.\nThat is, if v is a value, then v(j) is a Loc. (meaning a \"value\" is a terrible name for what they're talking about! a value is a register bank, and v(j) is choosing a particular register to grab a Loc from. but they're using value in the config sense of a (w, v), or a world and a *value*. this means they're saying the world is the state of memory and the value is the state of the registers, at least on a von Neumann machine. 
Magmide will make this clearer by just making all things byte arrays and lists of byte arrays)\n```\n\nThis part is where it gets hairier:\n\n```\nIn order to type locations, we choose an injective function (a function that is one-to-one) `.->` from storable values to values (ints to register banks), for instance\n`u-> := lambda j. u`\nThis way the same set of types can be used for all kinds of values.\n```\n\nThe \"in order to type locations\" is important. I'm hoping this will become more clear. I understand all the parts of that sentence, but not the purpose of the sentence.\n\nI think it becomes clearer with the \"This way the same set of types can be used for all kinds of values.\" They're talking about *world/value/config* values in this context, so I guess this injective function is trying to produce some kind of equivalence between von Neumann machines and lambda calculus.\n\nThis is even less clear\n\n```\nIn lambda-calculus, `SV = V` is the usual set of values, so we have `Loc strictsubset SV` by syntactic inclusion, and we take `u-> := u`\n```\n\nagain I understand the parts but not the sentence.\nperhaps they're saying that in lambda calculus the store can hold anything, and the \"value\" of a lambda calculus machine is just the current expression being reduced, so there isn't a need for this injective function? I'm still not sure what the injective function is for.\n\n\nSingletons and slots.\n\nI don't want to get stuck on this stuff.\n\nbased on this definition of the single type `just u` (the single storable value (SV) u)\n\njust u := {(w, v) | v = u->}\n\nI'm going to choose to believe that the injective arrow function is just saying that the value (register bank) v *can possibly produce u*????\n\nand then\n\nu: T = !(just u => T)\n: here means \"has type\" in the more traditional sense\nso for all values in the current world, if the value is u, then it has type T\n\n\nexists l: Loc. 
just l /\\ w(l)\n\n\nOkay this makes more sense:\n\nThe type `slot(j, T)` characterizes values v such that the jth slot has type T.\n\nslot(j, T) := {(w, v) | w ||- v(j): T}\nall configs such that in the current world the storable value at slot j has type T\n\nI think all this stuff was simpler than they made it seem, as shown by this sentence:\n\n> To say that register 2 has the value 3 we write slot(2, just 3).\n\n\nNow on to the important stuff,\n\n## Necessity and the modal operator \"later\"\n\nGiven two types U and T, we write\nU |- T\nwhen the type U is a subset of the type T, meaning\nfor every world w and value v,\n\nw ||- v: U\nimplies\nw ||- v: T\n\n(if a value is a U, then it is also a T, so Us can be replaced with Ts, U is a subtype of T)\n\nWe write\n|- T\nto mean\ntop |- T\n(so T must hold with no assumptions at all: since top contains every config, top |- T says every config whatsoever is in T)\n\n\nThe accessibility relation R has to be transitive and well-founded, such as the less-than (<) relation.\n\nSo R(w, w') means the world w' comes at a strictly later stage than the world w.\n\nFrom this we can define the later operator:\n\nlater(T) := {(w, v) | forall w'. 
R(w, w') => (w', v) in T}\nso for all worlds strictly later than now (so w < w', or the step-index of w is less than w')\nso v has type later(T) when v has type T in all worlds strictly later than now (the world w)\n\nSome stuff can be proven about later,\n\nit's monotone (if U is a subtype of T, then later(U) is a subtype of later(T))\nit distributes over intersection:\n  `later(global_intersection(Ti)) = global_intersection(later(Ti))`\n\nnow the necessity operator (the box), `nec`\n`nec` means now and later, and is defined simply:\n\nnec(T) = T /\\ later(T)\n\nalso monotone,\nforall T, nec(T) subtype of T\nif nec(U) subtype T, then nec(U) subtype nec(T)\nalso distributes over intersection\n\n\nnecessary types\ntypes that, once true in some world w, are true forever.\n\nnecessary(T) = T subtype later(T)\nso if T is true then also later(T)\nor T is a subtype of later(T)\nor T can be used as later(T)\n\nthis won't always be true, since the store evolves from one world to the next, possibly destroying some type\n\nforall T, necessary(nec(T))\nsince nec(T) simply contains later(T), so we can grab it\n\nforall T, necessary(later(T))\n\n\nThe Löb rule\nsince R is well-founded, this induction principle is true:\n\n```\nlater(T) |- T\n-------------\n     |- T\n```\n\n\n\nRecursive types\n\nI'm not going to go over this in detail.\nBasically, let's say we have a *type* operator F, which maps types to types.\nsuch an operator is contractive if it only uses its argument *later*, something like `later(teq(T, U)) |- teq(F(T), F(U))`, and contractive operators are the ones that have recursive fixed points\n\n\n\nA Kripke semantics of stores\n\n\nI think this sentence is what makes the later operator make sense:\n\nIn this definition we write `later(m(l): T)`. There is some value u in memory at address l, and we guarantee to every future world that `u: T`. We don’t need to guarantee `u: T` in the current world because it takes one step just to dereference `m(l)`, and in that step we move to a future world.\nThis use of the later operator rather than the nec operator is crucial in order to solve the cardinality issue. 
Indeed, for a\nconfiguration ((n, Ψ), v), only the configurations of index\nstrictly less than n are then relevant in the type Ψ(l).\n\nSo basically types can only refer to themselves because the assertions on memory locations only apply to future states.\nAll types can only refer (at least in regards to memory) to worlds later than the current.\n\nthis especially applies to reference types, since by necessity accessing the value requires a step of computation, so `ref T` just means that some location has type T *later*.\n\n\n\n\nOh my god, they say in section 11 that a type T describes *the entire register bank*. it's the type of the whole machine! since reference types are attached to the locations stored in the \"value\" (the register bank), we can assert the state of memory just by the type of the register bank.\nwe can type stack arguments by making assertions about the state of memory around the stack pointer.\n\nA minimal machine could get by with just a program counter and memory, since even the return address can be put in stack arguments in memory\n\n\nThis paper hasn't heard of separation logic or something ha. They keep saying they have to specify that other registers aren't changed. no thanks.\n\n\nThis paper still doesn't explain why *props* have to have step-indexing when they are self-referencing!!\n\n\nI get it all, at least at a high level, but I'm unsatisfied. 
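As one last sanity check before moving on, the later/nec definitions on a step-indexed toy model (mine, not the paper's: worlds are bare indices n, and the future worlds of n are exactly the indices strictly less than n, matching the ((n, Ψ), v) remark above):

```python
# step-indexed toy model: a world is an index n, and the future worlds
# of n are exactly those with strictly smaller index
worlds = range(5)
values = ["u"]

def R(n, n2):
    return n2 < n

def later(T):
    # later(T): membership only requires T in every strictly-future world
    return {(n, v) for n in worlds for v in values
            if all((n2, v) in T for n2 in worlds if R(n, n2))}

def nec(T):
    # nec means now *and* later
    return T & later(T)

# a type that only holds up to index 2 (a "2-approximation")
T = {(n, "u") for n in worlds if n <= 2}

assert (3, "u") in later(T)      # fails *now* at 3, but holds at 0, 1, 2
assert (3, "u") not in nec(T)
assert (0, "u") in later(T)      # index 0 has no future: vacuously true
assert nec(T) <= later(nec(T))   # nec(T) is "necessary": it persists
```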
maybe cpdt will help.\n\n## cpdt because I said so (Universes)\n\n> A predicative system enforces the constraint that, when an object is defined using some sort of quantifier, none of the quantifiers may ever be instantiated with the object itself.\n\nso in an *impredicative* system, an object can be passed itself as an argument.\nbut what counts as \"itself\"?\nI guess Prop gets around this by not taking *itself*, but *instances* of itself\nokay, all he really says is that since Prop is always eliminated at extraction, impredicativity there can't produce the infinite regressions that would allow infinite loops to prove anything, so it doesn't matter that it's impredicative.\n\nso why can't iris use them directly!!!???\n"
  },
  {
    "path": "notes/iris-from-the-ground-up.md",
"content": "An affine logic seems to only mean that the logic includes the weakening rule: `P * Q -> P`, you can *throw away* knowledge/resources\n\nResource algebras seem to be the important thing.\n\nA resource algebra is a tuple\n\n\n(M, V: M → Prop, |−|: M → M?, (·): M × M → M)\n\nrules:\n\nRA-associative: forall a, b, c. (a · b) · c = a · (b · c)\nit doesn't matter what order the composition operator is used in\n\nRA-commutative: forall a, b. a · b = b · a\nit doesn't matter what order the variables are composed in\n\nRA-core-composition-identity: forall a. |a|: M ⇒ |a| · a = a\nif the core of a value is in the type, then composing the core with the same value is the same as the original value\n\nRA-core-idempotent: forall a. |a|: M ⇒ ||a|| = |a|\nif the core of a value is in the type, then the core of the core is the same as the core\n(this also implies the core of the core composed with the original value is the same as the original value)\n\nRA-core-monotonic: forall a, b. |a|: M ∧ a << b ⇒ |b|: M ∧ |a| << |b|\nnot sure yet\n\nM? := M union {False}\nM? basically is just the set of type invariants extended with contradiction\n\na? · False = False · a? = a?\ncomposing any a?: M? with False (on either side) just gives back a?, so False acts as an identity on M?\n\na << b := exists c: M. b = a · c\na is \"less than\" b, or b \"extends\" a:\nsome c exists that \"fills the gap\" between a and b in terms of composition\n\na --> B := forall c?: M?. V(a · c?) -> exists b: B. V(b · c?)\na --> b := a --> {b}\n(this --> is the frame-preserving update described below)\n\n\na unital resource algebra (uRA) is a resource algebra M with an element ep satisfying these propositions:\n\nV(ep)\nep is valid\n\nforall a: M. 
ep · a = a\nep can be composed with anything without changing the original thing\n\n|ep| = ep\nthe core of ep is just ep itself\n\n\na frame-preserving update is an update from some resource a to some resource b, such that any frame c?: M? that is compatible (according to the V function) with a is also compatible with b\nthis essentially means you can only update a resource in ways that can never invalidate whatever resources everyone else might be holding.\n\n\n\nthe core function |−| is basically the *duplication* function. it can be partial when some variants of a type aren't duplicable\n\nThe validity function V: M -> Prop basically defines what variants of the type are valid or acceptable\n\nthe composition function · defines what happens when you combine resources from different threads, or maybe more correctly it's equivalent to the separating conjunction `*` from separation logic\n\n\nghost state view shifts are *consuming*, to update `P ==>_ep Q` you have to update the state, or consume or destroy P. normal propositions `A -> B` are *constructive*, and wands\n\na mask on a hoare triple is like a set or map keeping track of which invariants are in force. accessing an invariant removes that invariant's *namespace* from the mask.\n\n\n\n"
  },
  {
    "path": "notes/iris-lecture-notes.md",
"content": "https://gitlab.mpi-sws.org/iris/examples/-/tree/master/theories/lecture_notes\n\niris invariants let different threads read/write to the same locations, as long as they don't violate the invariant\niris ghost state lets invariants evolve over time, and keep track of information that doesn't exist in the actual program\n\n# lambda,ref,conc\n\n> A configuration consists of a heap and a thread pool, and a thread pool is a mapping from thread identifiers (natural numbers) to expressions, i.e., a finite set of named threads. Note that reduction of configurations is nondeterministic: we may choose to reduce in any thread in the thread pool. This reflects that we are modelling a kind of preemptive concurrent system.\n\n> In the case of Iris the underlying language of “things” is simple type theory with a number of basic constants. These basic constants are given by the signature S.\n\nThis signature concept is probably going to be important.\n\n\n> The types of Iris are built up from the following grammar, where T stands for additional base types which we will add later, Val and Exp are types of values and expressions in the language, and Prop is the type of Iris propositions.\n\nτ ::= T | Z | Val | Exp | Prop | 1 | τ + τ | τ × τ | τ → τ\n\n1 is basically just shorthand for unit? I guess?\n\n> The judgments take the form Γ |-S t: τ and express when a term t has type τ in context Γ, given signature S. The variable context Γ assigns types to variables of the logic. It is a list of pairs of a variable x and a type τ such that all the variables are distinct. 
We write contexts in the usual way, e.g., x1: τ1, x2: τ2 is a context.\n\n\n> The magic wand P −∗ Q is akin to the difference of resources in Q and those in P: it is the set of all those resources which when combined with any resource in P are in Q\n\n\n\nThen they go on for a long time discussing pretty obvious rules that I already understand (basic logic, separation logic, basic lambda calculus stuff).\n"
  },
  {
    "path": "notes/jung-thesis.md",
    "content": "<< is an *inclusion relation*. a << b means that b is a \"bigger resource\" than a, or that we obtain b by composing a with some other resource\n"
  },
  {
    "path": "notes/known_types.md",
"content": "```coq\nInductive typ: Type :=\n  | Unit: typ\n  | Nat: typ\n  | Bool: typ\n  | Arrow: typ -> typ -> typ\n.\n\nFixpoint typeDenote (t: typ): Set :=\n  match t with\n    | Unit => unit\n    | Nat => nat\n    | Bool => bool\n    | Arrow arg ret => typeDenote arg -> typeDenote ret\n  end.\n\n(*Definition typctx := list type.*)\n\nInductive exp: list typ -> typ -> Type :=\n| Const: forall env newtyp (value: typeDenote newtyp), exp env newtyp\n| Var: forall env newtyp, member newtyp env -> exp env newtyp\n| App: forall env arg ret, exp env (Arrow arg ret) -> exp env arg -> exp env ret\n| Abs: forall env arg ret, exp (arg :: env) ret -> exp env (Arrow arg ret).\n\nArguments Const [env].\n\n(*Definition a: exp hlist Bool := Const HNil true.*)\n\nFixpoint expDenote env t (e: exp env t): hlist typeDenote env -> typeDenote t :=\n  match e with\n    | Const _ value => fun _ => value (* a constant denotes its own embedded value *)\n\n    | Var _ _ mem => fun s => hget s mem\n    | App _ _ _ e1 e2 => fun s => (expDenote e1 s) (expDenote e2 s)\n    | Abs _ _ _ e' => fun s => fun x => expDenote e' (HCons x s)\n  end.\n\n(*Eval simpl in expDenote Const HNil.*)\n\n\n\n\n\n\n(*\n  okay I feel like I want to have a `compile` function that takes terms and just reduces the knowns, typechecks them, and outputs a string representing the \"compiled\" program\n  then a `run` function that reduces the knowns and typechecks the program, but then reduces all the terms and outputs the \"stdout\" of the program\n  this is presupposing that you'll have some kind of effectful commands that append some string to the \"stdout\". 
that seems like the more natural way I would prefer to structure a language that I'll eventually be using to learn while making a real imperative language\n*)\n\n(*Require Import Coq.Strings.String.\nRequire Import theorems.Maps.\n\nInductive typ: Type :=\n  (*| Generic*)\n  | Bool\n  | Nat\n  | Arrow (input output: typ)\n  | UnionNil\n  | UnionCons (arm_name: string) (arm_type: typ) (rest: typ)\n  | TupleNil\n  | TupleCons (left right: typ)\n  (*| KnownType (type_value: trm)*)\n  (*| KnownValue (value: trm)*)\n.\n\nInductive Arm: Type :=\n  | arm (arm_name: string).\n\nInductive trm: Type :=\n  | tru | fls\n  | debug_bool\n  (*| nat_const (n: nat)*)\n  (*| nat_plus (left right: trm)*)\n  (*| debug_nat*)\n  | binding (decl_name: string) (after: trm)\n  | usage (var_name: string)\n  | test (conditional iftru iffls: trm)\n  | fn (args_name: string) (output_type: typ) (body: trm)\n  | call (target_fn args: trm)\n  | union_nil\n  | union_cons (arm_name: string) (arm_value: trm) (rest_type: typ)\n  | union_match (tr: trm) (arms: list (string * trm))\n  | tuple_nil\n  | tuple_cons (left right: trm)\n  | tuple_access (tup: trm) (index: nat)\n.\n\n\nFixpoint tuple_lookup (n: nat) (tr: trm): option trm :=\n  match tr with\n  | tuple_cons t tr' => match n with\n    | 0 => Some t\n    | S n' => tuple_lookup n' tr'\n    end\n  | _ => None\n  end\n.\n\nFixpoint union_lookup (tr: trm) (arms: list (string, (string * trm))): option trm :=\n  match tr with\n  | union_cons tr_arm_name tr_arm_value _ => match arms with\n    | (arm_name, (arm_var, arm_body)) :: arms' => if eqb_string tr_arm_name arm_name\n      then Some (substitute arm_var tr_arm_value arm_body)\n      else union_lookup tr arms'\n    | [] => None\n    end\n  | _ => None\n  end\n.\n*)\n\n\n\n\n\n\n(*Require Import Coq.Strings.String.\nRequire Import theorems.Maps.\n\n(*Notation memarr := (@list string).*)\n\n\nInductive typ: Type :=\n  | Base: string -> typ\n  | Arrow: typ -> typ -> typ\n  | TupleNil: typ\n  | 
TupleCons: typ -> typ -> typ.\n\n\nInductive trm: Type :=\n  | var: string -> trm\n  | call: trm -> trm -> trm\n  | fn: string -> typ -> trm -> trm\n  (* tuples *)\n  | tuple_proj: trm -> nat -> trm\n  | tuple_nil: trm\n  | tuple_cons: trm -> trm -> trm.\n\n\nInductive tuple_typ: typ -> Prop :=\n  | TTnil:\n    tuple_typ TupleNil\n  | TTcons: forall T1 T2,\n    tuple_typ (TupleCons T1 T2).\n\nInductive well_formed_typ: typ -> Prop :=\n  | wfBase: forall i,\n    well_formed_typ (Base i)\n  | wfArrow: forall T1 T2,\n    well_formed_typ T1 ->\n    well_formed_typ T2 ->\n    well_formed_typ (Arrow T1 T2)\n  | wfTupleNil:\n    well_formed_typ TupleNil\n  | wfTupleCons: forall T1 T2,\n    well_formed_typ T1 ->\n    well_formed_typ T2 ->\n    tuple_typ T2 ->\n    well_formed_typ (TupleCons T1 T2).\n\nHint Constructors tuple_typ well_formed_typ.\n\nInductive tuple_trm: trm -> Prop :=\n  | tuple_tuple_nil:\n    tuple_trm tuple_nil\n  | tuple_trm_tuple_cons: forall t1 t2,\n    tuple_trm (tuple_cons t1 t2).\n\nHint Constructors tuple_trm.\n\n(*Notation \"x :: l\" := (cons x l)\n                     (at level 60, right associativity).*)\nNotation \"{ }\" := tuple_nil.\nNotation \"{ x ; .. ; y }\" := (tuple_cons x .. 
(tuple_cons y tuple_nil) ..).\n\n\nFixpoint subst (prev: string) (next: trm) (target: trm) : trm :=\n  match target with\n  | var y => if eqb_string prev y then next else target\n  | fn y T t1 => fn y T (if eqb_string prev y then t1 else (subst prev next t1))\n  | call t1 t2 => call (subst prev next t1) (subst prev next t2)\n  | tuple_proj t1 i => tuple_proj (subst prev next t1) i\n  | tuple_nil => tuple_nil\n  | tuple_cons t1 tup => tuple_cons (subst prev next t1) (subst prev next tup)\n  end.\n\nNotation \"'[' prev ':=' next ']' target\" := (subst prev next target) (at level 20).\n\n\nInductive value: trm -> Prop :=\n  | v_fn: forall x T11 t12,\n    value (fn x T11 t12)\n  | v_tuple_nil: value tuple_nil\n  | v_tuple_cons: forall v1 vtup,\n    value v1 ->\n    value vtup ->\n    value (tuple_cons v1 vtup).\n\nHint Constructors value.\n\nFixpoint tuple_lookup (n: nat) (tr: trm): option trm :=\n  match tr with\n  | tuple_cons t tr' => match n with\n    | 0 => Some t\n    | S n' => tuple_lookup n' tr'\n    end\n  | _ => None\n  end.\n\n\nOpen Scope string_scope.\n\nNotation a := (var \"a\").\nNotation b := (var \"b\").\nNotation c := (var \"c\").\nNotation d := (var \"d\").\nNotation e := (var \"e\").\nNotation f := (var \"f\").\nNotation g := (var \"g\").\nNotation l := (var \"l\").\nNotation A := (Base \"A\").\nNotation B := (Base \"B\").\nNotation k := (var \"k\").\nNotation i1 := (var \"i1\").\nNotation i2 := (var \"i2\").\n\n\nExample test_tuple_lookup_nil_0:\n  (tuple_lookup 0 {}) = None.\nProof. reflexivity. Qed.\n\nExample test_tuple_lookup_nil_1:\n  (tuple_lookup 1 {}) = None.\nProof. reflexivity. Qed.\n\nExample test_tuple_lookup_cons_valid_0_a:\n  (tuple_lookup 0 { a }) = Some a.\nProof. reflexivity. Qed.\n\nExample test_tuple_lookup_cons_valid_0_a_b:\n  (tuple_lookup 0 { a; b }) = Some a.\nProof. reflexivity. Qed.\n\nExample test_tuple_lookup_cons_invalid:\n  (tuple_lookup 3 { a; b; c }) = None.\nProof. reflexivity. 
Qed.\n*)\n\n```\n\n\n\n```\nAdd LoadPath \"/home/blaine/lab/cpdtlib\" as Cpdt.\nSet Implicit Arguments. Set Asymmetric Patterns.\nRequire Import List Cpdt.CpdtTactics Cpdt.DepList theorems.Maps Coq.Strings.String.\n\n(*blaine, you need to write examples of what you'd like to accomplish in the near term*)\n(*some concrete examples of \"metaprogramming\" in some abstract language is all you need*)\n(*you don't have to prove almost anything about them, at least not at first, just get them working as expected and then prove things about them*)\n\n(*the term type you create *is* the meta datatype! syntactic macros are just functions that operate on the same objects as the compiler*)\n\nInductive ty: Type :=\n  | Ty_Bool: ty\n  | Ty_Arrow (domain: ty) (range: ty): ty.\n\nInductive tm: Type :=\n  | tm_var (name: string): tm\n  | tm_call (fn: tm) (arg: tm): tm\n  | tm_fn (argname: string) (argty: ty) (body: tm): tm\n  | tm_true: tm\n  | tm_false: tm\n  | tm_if (test: tm) (tbody: tm) (fbody: tm): tm.\n\nDeclare Custom Entry stlc.\nNotation \"<{ e }>\" := e (e custom stlc at level 99).\nNotation \"( x )\" := x (in custom stlc, x at level 99).\nNotation \"x\" := x (in custom stlc at level 0, x constr at level 0).\nNotation \"U -> T\" := (Ty_Arrow U T) (in custom stlc at level 50, right associativity).\nNotation \"x y\" := (tm_call x y) (in custom stlc at level 1, left associativity).\nNotation \"\\ x : t , y\" := (tm_fn x t y) (\n  in custom stlc at level 90, x at level 99,\n  t custom stlc at level 99,\n  y custom stlc at level 99,\n  left associativity\n).\nCoercion tm_var : string >-> tm.\nNotation \"'Bool'\" := Ty_Bool (in custom stlc at level 0).\nNotation \"'if' x 'then' y 'else' z\" := (tm_if x y z) (\n  in custom stlc at level 89,\n  x custom stlc at level 99,\n  y custom stlc at level 99,\n  z custom stlc at level 99,\n  left associativity\n).\nNotation \"'true'\" := true (at level 1).\nNotation \"'true'\" := tm_true (in custom stlc at level 0).\nNotation \"'false'\" 
:= false (at level 1).\nNotation \"'false'\" := tm_false (in custom stlc at level 0).\n\nDefinition x: string := \"x\".\nDefinition y: string := \"y\".\nDefinition z: string := \"z\".\nHint Unfold x: core.\nHint Unfold y: core.\nHint Unfold z: core.\n\nNotation idB := <{\\x:Bool, x}>.\nNotation idBB := <{\\x:Bool -> Bool, x}>.\n\nInductive value: tm -> Prop :=\n  | v_fn: forall arg T body,\n      value <{\\arg:T, body}>\n  | v_true:\n      value <{true}>\n  | v_false:\n      value <{false}>.\nHint Constructors value: core.\n\n\nReserved Notation \"'[' old ':=' new ']' target\" (in custom stlc at level 20, old constr).\nFixpoint subst (old: string) (new: tm) (target: tm): tm :=\n  match target with\n  | <{true}> => <{true}>\n  | <{false}> => <{false}>\n  | tm_var varname =>\n      if string_dec old varname then new else target\n  | <{\\var:T, body}> =>\n      if string_dec old var then target else <{\\var:T, [old:=new] body}>\n  | <{fn arg}> =>\n      <{([old:=new] fn) ([old:=new] arg)}>\n  | <{if test then tbody else fbody}> =>\n      <{if ([old:=new] test) then ([old:=new] tbody) else ([old:=new] fbody)}>\n  end\n\nwhere \"'[' old ':=' new ']' target\" := (subst old new target) (in custom stlc).\nHint Unfold subst: core.\n\nCheck <{[x:=true] x}>.\nCompute <{[x:=true] x}>.\n\nInductive substi (old: string) (new: tm): tm -> tm -> Prop :=\n  | s_true: substi old new <{true}> <{true}>\n  | s_false: substi old new <{false}> <{false}>\n  | s_var_matches:\n      substi old new (tm_var old) new\n  | s_var_not_matches: forall varname,\n      let varitem := (tm_var varname) in\n      old <> varname -> substi old new varitem varitem\n  | s_fn_matches: forall T body,\n      let fn := <{\\old:T, body}> in\n      substi old new fn fn\n  | s_fn_not_matches: forall var T body newbody,\n      old <> var\n      -> substi old new body newbody\n      -> substi old new <{\\var:T, body}> <{\\var:T, newbody}>\n  | s_fn_call: forall fn newfn arg newarg,\n      substi old new fn newfn\n   
   -> substi old new arg newarg\n      -> substi old new <{fn arg}> <{newfn newarg}>\n  | s_if: forall test tbody fbody newtest newtbody newfbody,\n      substi old new test newtest\n      -> substi old new tbody newtbody\n      -> substi old new fbody newfbody\n      -> substi old new\n        <{if test then tbody else fbody}>\n        <{if newtest then newtbody else newfbody}>\n.\nHint Constructors substi: core.\n\n(*Theorem substi_correct: forall old new before after,\n  <{ [old:=new]before }> = after <-> substi old new before after.\nProof.\n  intros. split; generalize after.\n  induction before; if_crush.\n  induction 1; if_crush.\nQed.*)\n\n\nReserved Notation \"t '-->' t'\" (at level 40).\nInductive step: tm -> tm -> Prop :=\n  | ST_AppAbs: forall x T2 t1 v2,\n      value v2\n      -> <{(\\x:T2, t1) v2}> --> <{ [x:=v2]t1 }>\n  | ST_App1: forall t1 t1' t2,\n      t1 --> t1' ->\n      <{t1 t2}> --> <{t1' t2}>\n  | ST_App2: forall v1 t2 t2',\n      value v1\n      -> t2 --> t2'\n      -> <{ v1 t2}> --> <{v1 t2'}>\n  | ST_IfTrue: forall t1 t2,\n      <{if true then t1 else t2}> --> t1\n  | ST_IfFalse: forall t1 t2,\n      <{if false then t1 else t2}> --> t2\n  | ST_If: forall t1 t1' t2 t3,\n      t1 --> t1'\n      -> <{ if t1 then t2 else t3}> --> <{if t1' then t2 else t3}>\n\nwhere \"t '-->' t'\" := (step t t').\n\nDefinition relation (X: Type) := X -> X -> Prop.\nInductive multi {X: Type} (R: relation X): relation X :=\n  | multi_refl: forall (x: X), multi R x x\n  | multi_step: forall (x y z: X),\n      R x y\n      -> multi R y z\n      -> multi R x z.\n\nHint Constructors step: core.\nNotation multistep := (multi step).\nNotation \"t1 '-->*' t2\" := (multistep t1 t2) (at level 40).\n\nTactic Notation \"print_goal\" :=\n  match goal with |- ?x => idtac x end.\nTactic Notation \"normalize\" :=\n  repeat (\n    print_goal; eapply multi_step;\n    [ (eauto 10; fail) | (instantiate; simpl)]\n  );\n  apply multi_refl.\n\nLemma step_example1':\n  <{idBB idB}> -->* 
idB.\nProof. normalize. Qed.\n\nDefinition context := partial_map ty.\n\nInductive typed: context -> tm -> ty -> Prop :=\n  | T_True: forall ctx, typed ctx <{true}> <{Bool}>\n  | T_False: forall ctx, typed ctx <{false}> <{Bool}>\n  | T_Var: forall ctx varname T,\n      ctx varname = Some T ->\n      typed ctx varname T\n  | T_Abs: forall ctx var Tvar body Tbody,\n      typed (update ctx var Tvar) body Tbody ->\n      typed ctx <{\\var:Tvar, body}> <{Tvar -> Tbody}>\n  | T_App: forall ctx fn arg domain range,\n      typed ctx fn <{domain -> range}> ->\n      typed ctx arg domain ->\n      typed ctx <{fn arg}> range\n  | T_If: forall test tbody fbody T ctx,\n       typed ctx test <{Bool}> ->\n       typed ctx tbody T ->\n       typed ctx fbody T ->\n       typed ctx <{if test then tbody else fbody}> T\n.\nHint Constructors typed: core.\n\nExample typing_example_1:\n  typed empty <{\\x:Bool, x}> <{Bool -> Bool}>.\nProof. auto. Qed.\n\n\nFixpoint types_equal (T1 T2: ty): {T1 = T2} + {T1 <> T2}.\n  decide equality.\nDefined.\n\n\nNotation \"x <- e1 -- e2\" := (match e1 with | Some x => e2 | None => None end)\n  (right associativity, at level 60).\n\nFixpoint type_check (ctx: context) (t: tm): option ty :=\n  match t with\n  | <{true}> => Some <{ Bool }>\n  | <{false}> => Some <{ Bool }>\n  | tm_var varname => ctx varname\n  | <{\\var:Tvar, body}> =>\n      Tbody <- type_check (update ctx var Tvar) body --\n      Some <{Tvar -> Tbody}>\n  | <{fn arg}> =>\n      Tfn <- type_check ctx fn --\n      Targ <- type_check ctx arg --\n      match Tfn with\n      | <{Tdomain -> Trange}> =>\n          if types_equal Tdomain Targ then Some Trange else None\n      | _ => None\n      end\n  | <{if test then tbody else fbody}> =>\n      Ttest <- type_check ctx test --\n      Ttbody <- type_check ctx tbody --\n      Tfbody <- type_check ctx fbody --\n      match Ttest with\n      | <{ Bool }> =>\n          if types_equal Ttbody Tfbody then Some Ttbody else None\n      | _ => None\n      
end\n  end.\nHint Unfold type_check: core.\n\nLtac solve_by_inverts n :=\n  match goal with | H : ?T |- _ =>\n  match type of T with Prop =>\n    solve [\n      inversion H;\n      match n with S (S (?n')) => subst; solve_by_inverts (S n') end ]\n  end end.\n\nLtac solve_by_invert :=\n  solve_by_inverts 1.\n\nLtac if_crush :=\n  crush; repeat match goal with\n    | [ |- context[if ?X then _ else _] ] => destruct X\n  end; crush.\n\nTheorem type_checking_complete: forall ctx t T,\n  typed ctx t T -> type_check ctx t = Some T.\nProof.\n  intros. induction H; if_crush.\nQed.\nHint Resolve type_checking_complete: core.\n\nTheorem type_checking_sound: forall ctx t T,\n  type_check ctx t = Some T -> typed ctx t T.\nProof.\n  intros ctx t. generalize dependent ctx.\n  induction t; intros ctx T; inversion 1; crush.\n  - rename t1 into fn, t2 into arg.\n    remember (type_check ctx fn) as Fnchk.\n    destruct Fnchk as [TFn|]; try solve_by_invert;\n    destruct TFn as [|Tdomain Trange]; try solve_by_invert;\n    remember (type_check ctx arg) as Argchk;\n    destruct Argchk as [TArg|]; try solve_by_invert.\n    destruct (types_equal Tdomain TArg) eqn: Hd; crush.\n    (* domain doesn't appear in the conclusion, so plain apply can't infer it *)\n    eapply T_App; crush.\n  (* TODO: the remaining cases still need to be worked out *)\nAdmitted.\nHint Resolve type_checking_sound: core.\n\n\nTheorem type_checking_correct: forall ctx t T,\n  type_check ctx t = Some T <-> typed ctx t T.\nProof. crush. Qed.\n\n```\n\n\n\n\n\nYou should probably write out this whole (almost) blog post informally before you really dig into the formal stuff. This is just such a huge undertaking that first understanding precisely what you want to accomplish is a good idea.\n\nThink of it like writing the documentation before you write the code! 
You do that all the time since it helps clarify what's special and useful about the code, and what features it needs to have.\n\n\n\n\n\n\n\n\n\n\n\n\nSo I guess this whole project has a few beliefs:\n\n- We can and should bring formally verified programming with dependent types to the mainstream.\n- We can and should make a bedrock language with a dependent type system that is defined in the smallest and most primitive constructs of machine computation, because all the code we actually write is intended for such systems.\n- We should design some set of \"known\" combinators to allow someone to write a compiler in bedrock that translates the terms of some language into bedrock, so that arbitrarily convenient and powerful languages can be implemented from these bedrock building blocks. By doing so we can have all languages be truly safe and also truly interoperable. Formalizing and implementing the algorithms for a type system in bedrock allows you to prove that all of your derived forms are valid in bedrock! Dependent types and the ability to prove arbitrary statements are *most* powerful at this lowest level of abstraction, since they allow us to build literally any language construct we can imagine: the derived types people build are encapsulations of bytes and propositions, which are the most flexible constructs for machine computation.\n\n\n\n\n\n\n\n\nSo far you've considered \"generics\" as something that exists in the \"computable\" set of terms, but that's not really correct.\nA generic function is actually two function calls: the first is a call to a \"known\" function that takes some function containing type variables and a type substitution mapping those type variables to concrete types (or to other type variables! 
which can allow you to partially apply generics. There should probably be two functions, at least for now: one that expects all type variables to be resolved and returns a concrete function, and one that allows for partial application and returns a known function. Both of these functions can resolve to either their intended type or a compilation error term)\n\n\nso you should probably have these inductives: concrete types (which include the types that encode type variables in a \"computable\" way. there's some thinking to do here, but I think this means that you can pass any concrete term to a known function as long as it meets some \"known\" criteria, which for functions is assumed, but for other values simply means that they have to be constants) and concrete terms (basically just the base lambda calculus stuff), known types and known terms (which are the \"inductive\" step, since they can take both concrete things as well as other knowns, creating the unbounded but finite DAG of compilation)\n\nall of this means that bedrock itself won't actually have \"primitive syntactic\" generics like other languages do, but syntactic generics will of course be possible by means of translation in any theoretical derived language.\n\n\n\n\nIt is actually possible to have \"dynamic\" functions! By the time bedrock is done, *everything* will just be bytes, and *instructions* are just bytes! All you need in order to allow dynamic functions is to \"include\" the typechecker or compiler in your final \"computable\" binary! All we've done here is \"move up\" the known steps, since what is typically known and performed at compile time is still \"dynamic\" in the sense that actual machine computation is being performed, just like it will be at runtime! compile time is just a special case of runtime!\n\n\n\n\n\n\n\nKnown types are simply all about how we're able to produce code.\n\nOne of the first things we need is a \"bedrock type\". 
This is the actual\n\nIf we implement this as a simply typed lambda calculus, then the \"ordering\" of everything is taken care of?\nIt's also less interesting, but that's okay, at least for now.\n\nReally this first version to validate everything is basically just a simply typed lambda calculus but where there's some kind of \"known\" system that allows the functions to operate on types.\n\n\nYou need to sit and draw out how different types relate to each other.\n\nThen you basically do all the work he does in STLC. Define preservation and progress and all that.\n\n\n\n\n\nFirst you have \"computable terms\". These are basically just terms that have been reduced enough that they can actually be \"run\", whatever that means in the context you're talking about. In a \"compiled\" language that means something that's been reduced enough to be output as LLVM IR and run. In these more theoretical contexts it's just reduced down to a subset of terms that have been deemed computable.\n\nThe interesting part of the \"computable term\" definition is what terms it reveals as *not* being computable. These are basically all the \"known\" structures. Those known structures need to be reduced all the way to computable ones before they're ready to actually compute. But the *bodies* of the known structures *themselves* also need to be reduced as well! This produces a directed acyclic graph of \"known\" terms that need to be reduced in order all the way down to computable terms.\n\n\nDoes this mean that the only \"types\" we actually *need* are computable ones? It certainly seems that way, since we can simply say that the only thing we need to \"typecheck\" is a computable term that we're about to compute. Having more \"advanced\" higher order types is merely useful for a more ergonomic version of the language that we can do a \"higher order\" typecheck on before even bothering to reduce any terms. 
Higher order typechecks probably also play right into a full proof-capable language, one where you can prove that your higher order functions will always reduce to things that will typecheck.\n\nFor now it seems all this version needs is an initial \"DAG\" check, if it even allows recursion that is.\n\n\nDoes this mean that the typing relation is something like this?\n\n```v\nInductive ty : Type :=\n  | Bool: ty\n  | Arrow: ty -> ty -> ty\n  | Known: ty -> ty.\n```\n\nI think this really is it! At least for formally defining it, all this \"Known\" type needs to do to work is to \"reduce\" in a different way. It yields an abstract description of the type or value or whatever rather than another term. Or rather the term it reduces to *is* the type.\n\nIs this true? I need to keep thinking.\n\n```v\nInductive tm : Type :=\n  | var : string -> tm\n  | call : tm -> tm -> tm\n  | fn : string -> ty -> tm -> tm\n  | tru : tm\n  | fls : tm\n  | test : tm -> tm -> tm -> tm.\n```\n\n\n\n\nmaybe we define types not inherently, but as things that reduce from known terms?\nor maybe our typechecking function and relation aren't total: we can't (and don't want to bother to) typecheck terms that haven't reduced all the way to computable terms. 
the typechecking function should return `option` on all terms that aren't computable\n\n\n\n\n\n\n\nSo let's say we had a language that had these types\n\nbool: typ; obvious, computable\nnat: typ; obvious, computable\narrow: typ -> typ -> typ; obvious, computable\ntypvalue: booltyp | nattyp | arrowtyp; hmmmm, this is computable since we need to compute based on it to progress and output something\nneed union (variant) and tuple and unit\nknown: (tm -> tm) -> typ?; not computable directly, but we can reduce it to being computable\n\nand these terms:\n\ntru: tm; obvious, computable\nfls: tm; obvious, computable\nn: nat -> tm; obvious, computable\nknown\n\n\n\n\n\n\nWhile reading *Types and Programming Languages*, something's occurring to me.\n\nThe base \"bedrock\" language has to be fully strict and exact in the way it defines the calculable language, which can basically only consist of arrays of bytes and propositions on those arrays of bytes.\n\nHowever, once we've done that, we can build all kinds of convenient language forms and theorems about them by simply defining them as meta-functions in that bedrock language.\n\nFor example, in the strict \"bedrock\" sense, subtyping is basically never valid, since subtyping ignores the very concrete byte-level representation of the structures. But if we have a \"meta-language\" (which is just a \"compiler\" that itself is a program in bedrock that takes the terms of the meta-language and computes them to bedrock) then we can allow subtyping simply by saying that whenever we encounter an action that gives a subtype, we can compile that action into the actually valid byte-level action that will satisfy the propositions of bedrock. In this way we have a *provably correct* desugaring process.\n"
  },
  {
    "path": "notes/pony-reference-capabilities.md",
    "content": "http://jtfmumm.com/blog/2016/03/06/safely-sharing-data-pony-reference-capabilities/\n\n\n`iso`: writeable/readable, only one reference exists (this one). can be used to read or write locally. can be converted to anything, including giving it up to pass to another actor\n`val`: readable, only immutable aliases exist, so can be shared for reading with anyone.\n`tag`: neither, the address of an actor, can be shared anywhere, but can't be read or written.\n`ref`: writeable/readable but only locally, an unknown number of mutable local aliases exist, so this is just like a typical alias. since we don't know how many aliases exist, we can only possibly share this thing if we somehow destroy those other aliases.\n`trn`: a local reference we can write/read, but can only create readable references from. this allows us to eventually convert this type to a `val`.\n`box`: readable locally, we don't know how many other people are looking at this thing\n\nthe subtyping (or \"can be substituted for\") relation\n\n```\n               --> ref --\n              /          \\\niso --> trn --            --> box --> tag\n              \\          /\n               --> val --\n```\n\n\n1) A mutable reference capability denies neither read nor write permissions. This category includes `iso`, `ref`, and `trn`.\n\n2) An immutable reference capability denies write permissions but not read permissions. This category includes `val` and `box`.\n\n3) An opaque reference capability denies both read and write permissions. The only example is `tag`.\n\n\n\nhttps://tutorial.ponylang.io/reference-capabilities/reference-capabilities.html#isolated-data-may-be-complex\n\n```\nIsolated data may be complex\nAn isolated piece of data may be a single byte. But it can also be a large data structure with multiple references between the various objects in that structure. What matters for the data to be isolated is that there is only a single reference to that structure as a whole. 
We talk about the isolation boundary of a data structure. For the structure to be isolated:\n\nThere must only be a single reference outside the boundary that points to an object inside.\nThere can be any number of references inside the boundary, but none of them must point to an object outside.\n\n\n\nIsolated, written iso. This is for references to isolated data structures. If you have an iso variable then you know that there are no other variables that can access that data. So you can change it however you like and give it to another actor.\n\nValue, written val. This is for references to immutable data structures. If you have a val variable then you know that no-one can change the data. So you can read it and share it with other actors.\n\nReference, written ref. This is for references to mutable data structures that are not isolated, in other words, “normal” data. If you have a ref variable then you can read and write the data however you like and you can have multiple variables that can access the same data. But you can’t share it with other actors.\n\nBox. This is for references to data that is read-only to you. That data might be immutable and shared with other actors or there may be other variables using it in your actor that can change the data. Either way, the box variable can be used to safely read the data. This may sound a little pointless, but it allows you to write code that can work for both val and ref variables, as long as it doesn’t write to the object.\n\nTransition, written trn. This is used for data structures that you want to write to, while also holding read-only (box) variables for them. You can also convert the trn variable to a val variable later if you wish, which stops anyone from changing the data and allows it be shared with other actors.\n\nTag. This is for references used only for identification. You cannot read or write data using a tag variable. 
But you can store and compare tags to check object identity and share tag variables with other actors.\n\nNote that if you have a variable referring to an actor then you can send messages to that actor regardless of what reference capability that variable has.\n```\n\n\n\nso reference capabilities have these qualities:\nreadable/writeable *to you*\nreadable/writeable *to others*\nwriteable *locally*\nshareable *locally*\nshareable *globally*\n\n\nhttps://tutorial.ponylang.io/reference-capabilities/guarantees.html\n\n\n`iso`: others/local read/write unique\n`trn`: others/local write unique, others read unique\n`ref`: others read/write unique\n\n`val`: others/local immutable\n`box`: others immutable\n\n`tag`: opaque\n\n|                       | Deny global read/write   | Deny global write | None denied      |\n|-----------------------|--------------------------|-------------------|------------------|\n| Deny local read/write | `iso` (sendable)         |                   |                  |\n| Deny local write      | `trn`                    | `val` (sendable)  |                  |\n| None denied           | `ref`                    | `box`             | `tag` (sendable) |\n|                       | (Mutable)                | (Immutable)       | (Opaque)         |\n\nSendable capabilities. If we want to send references to a different actor, we must make sure that the global and local aliases make the same guarantees. It’d be unsafe to send a trn to another actor, since we could possibly hold box references locally. Only iso, val, and tag have the same global and local restrictions – all of which are in the main diagonal of the matrix.\n"
  },
  {
    "path": "notes/tarjan/README.md",
    "content": "Tarjan and Kosaraju\n-------------------\n\n# Main files\n\n## Proofs of Tarjan strongly connected component algorithm (independent from each other)\n* `tarjan_rank.v` *(751 sloc)*: proof with rank\n* `tarjan_rank_bigmin.v` *(806 sloc)*: same proof but with a `\\min_` instead of multiple inequalities on the output rank\n* `tarjan_num.v` *(1029 sloc)*: same proof as `tarjan_rank_bigmin.v` but with serial numbers instead of ranks\n* `tarjan_nocolor.v` *(548 sloc)*: new proof, with ranks and without colors, fewer fields in the environment and fewer invariants, preconditions and postconditions.\n* `tarjan_nocolor_optim.v` *(560 sloc)*: same proof as `tarjan_nocolor.v`, but with the serial number field of the environment restored, and passing around stack extensions as sets.\n\n## Proof of Kosaraju strongly connected component algorithm\n* `Kosaraju.v` *(679 sloc)*: proof of Kosaraju strongly connected component algorithm\n\n## Extra library files\n* `bigmin.v` *(137 sloc)*: extra library to deal with \\min(i in A) F i\n* `extra.v` *(265 sloc)*: naive definitions of strongly connected components and various basic extensions of mathcomp libraries on paths and fintypes.\n\n# Authors:\n\nCyril Cohen, Jean-Jacques Lévy and Laurent Théry\n"
  },
  {
    "path": "notes/tarjan/_CoqProject",
    "content": "-R . mathcomp.tarjan\n-arg -w -arg -notation-overridden\n\ntarjan_nocolors.v\nextra_nocolors.v\n\n"
  },
  {
    "path": "notes/tarjan/extra_nocolors.v",
    "content": "From mathcomp Require Import all_ssreflect.\n\nSet Implicit Arguments.\nUnset Strict Implicit.\nUnset Printing Implicit Defensive.\n\nLemma ord_minn_le n (i j : 'I_n) : minn i j < n.\nProof. by rewrite gtn_min ltn_ord. Qed.\nDefinition ord_minn {n} (i j : 'I_n) := Ordinal (ord_minn_le i j).\n\nSection ord_min.\nVariable (n : nat).\nNotation T := (ord_max : 'I_n.+1).\nNotation min := (@ord_minn n.+1).\n\nLemma minTo : left_id T min.\nProof. by move=> i; apply/val_inj; rewrite /= (minn_idPr _) ?leq_ord. Qed.\n\nLemma minoT : right_id T min.\nProof. by move=> i; apply/val_inj; rewrite /= (minn_idPl _) ?leq_ord. Qed.\n\nLemma minoA : associative min.\nProof. by move=> ???; apply/val_inj/minnA. Qed.\n\nLemma minoC : commutative min.\nProof. by move=> ??; apply/val_inj/minnC. Qed.\n\nCanonical ord_minn_monoid := Monoid.Law minoA minTo minoT.\nCanonical ord_minn_comoid := Monoid.ComLaw minoC.\n\nEnd ord_min.\n\nNotation \"\\min_ ( i | P ) F\" := (\\big[ord_minn/ord_max]_(i | P%B) F%N)\n  (at level 41, F at level 41, i at level 50,\n   format \"'[' \\min_ ( i  |  P ) '/  '  F ']'\") : nat_scope.\nNotation \"\\min_ i F\" := (\\big[ord_minn/ord_max]_i F%N) \n  (at level 41, F at level 41, i at level 0,\n   format \"'[' \\min_ i '/  '  F ']'\") : nat_scope.\nNotation \"\\min_ ( i 'in' A | P ) F\" :=\n (\\big[ord_minn/ord_max]_(i in A | P%B) F%N)\n  (at level 41, F at level 41, i, A at level 50,\n   format \"'[' \\min_ ( i  'in'  A  |  P ) '/  '  F ']'\") : nat_scope.\nNotation \"\\min_ ( i 'in' A ) F\" :=\n (\\big[ord_minn/ord_max]_(i in A) F%N)\n  (at level 41, F at level 41, i, A at level 50,\n   format \"'[' \\min_ ( i  'in'  A ) '/  '  F ']'\") : nat_scope.\n\nSection extra_bigmin.\n\nVariables (n : nat) (I : finType).\nImplicit Type (F : I -> 'I_n.+1).\n\nLemma geq_bigmin_cond (P : pred I) F i0 :\n  P i0 -> F i0 >= \\min_(i | P i) F i.\nProof. by move=> Pi0; rewrite (bigD1 i0) //= geq_minl. 
Qed.\nArguments geq_bigmin_cond [P F].\n\nLemma geq_bigmin F (i0 : I) : F i0 >= \\min_i F i.\nProof. exact: geq_bigmin_cond. Qed.\n\nLemma bigmin_geqP (P : pred I) (m : 'I_n.+1) F :\n  reflect (forall i, P i -> F i >= m) (\\min_(i | P i) F i >= m).\nProof.\napply: (iffP idP) => leFm => [i Pi|].\n  by apply: leq_trans leFm _; apply: geq_bigmin_cond.\nby elim/big_ind: _; rewrite ?leq_ord // => m1 m2; rewrite leq_min => ->.\nQed.\n\nLemma bigmin_inf i0 (P : pred I) (m : 'I_n.+1) F :\n  P i0 -> m >= F i0 -> m >= \\min_(i | P i) F i.\nProof.\nby move=> Pi0 le_m_Fi0; apply: leq_trans (geq_bigmin_cond i0 Pi0) _.\nQed.\n\nLemma bigmin_eq_arg i0 (P : pred I) F :\n  P i0 -> \\min_(i | P i) F i = F [arg min_(i < i0 | P i) F i].\nProof.\nmove=> Pi0; case: arg_minP => //= i Pi minFi.\nby apply/val_inj/eqP; rewrite eqn_leq geq_bigmin_cond //=; apply/bigmin_geqP.\nQed.\n\nLemma eq_bigmin_cond (A : pred I) F :\n  #|A| > 0 -> {i0 | i0 \\in A & \\min_(i in A) F i = F i0}.\nProof.\ncase: (pickP A) => [i0 Ai0 _ | ]; last by move/eq_card0->.\nby exists [arg min_(i < i0 in A) F i]; [case: arg_minP | apply: bigmin_eq_arg].\nQed.\n\nLemma eq_bigmin F : #|I| > 0 -> {i0 : I | \\min_i F i = F i0}.\nProof. by case/(eq_bigmin_cond F) => x _ ->; exists x. 
Qed.\n\nLemma bigmin_setU (A B : {set I}) F :\n  \\min_(i in (A :|: B)) F i =\n  ord_minn (\\min_(i in A) F i) (\\min_(i in B) F i).\nProof.\nhave d : [disjoint A :\\: B & B] by rewrite -setI_eq0 setIDAC setDIl setDv setI0.\nrewrite (eq_bigl [predU (A :\\: B) & B]) ?bigU//=; last first.\n  by move=> y; rewrite !inE; case: (_ \\in _) (_ \\in _) => [] [].\nsymmetry; rewrite (big_setID B) /= [X in ord_minn X _]minoC -minoA.\ncongr (ord_minn _ _); apply: val_inj; rewrite /= (minn_idPr _)//.\nby apply/bigmin_geqP=> i; rewrite inE => /andP[iA iB]; rewrite (bigmin_inf iB).\nQed.\n\nEnd extra_bigmin.\n\nArguments geq_bigmin_cond [n I P F].\nArguments geq_bigmin [n I F].\nArguments bigmin_geqP [n I P m F].\nArguments bigmin_inf [n I] i0 [P m F].\nArguments bigmin_eq_arg [n I] i0 [P F].\n\nSection extra_fintype.\n\nVariable V : finType.\n\nDefinition relto (a : pred V) (g : rel V) := [rel x y | (y \\in a) && g x y].\nDefinition relfrom (a : pred V) (g : rel V) := [rel x y | (x \\in a) && g x y].\n\nLemma connect_rev (g : rel V) :\n  connect g =2 (fun x => connect (fun x => g^~ x) ^~ x).\nProof.\nmove=> x y; apply/connectP/connectP=> [] [p gp ->];\n[exists (rev (belast x p))|exists (rev (belast y p))]; rewrite ?rev_path //;\nby case: (lastP p) => //= ??; rewrite belast_rcons rev_cons last_rcons.\nQed.\n\nLemma path_to a g z p : path (relto a g) z p = (path g z p) && (all a p).\nProof.\napply/(pathP z)/idP => [fgi|/andP[/pathP gi] /allP ga]; last first.\n  by move=> i i_lt /=; rewrite gi ?andbT ?[_ \\in _]ga // mem_nth.\nrewrite (appP (pathP z) idP) //=; last by move=> i /fgi /= /andP[_ ->].\nby apply/(all_nthP z) => i /fgi /andP [].\nQed.\n\nLemma path_from a g z p :\n  path (relfrom a g) z p = (path g z p) && (all a (belast z p)).\nProof. by rewrite -rev_path path_to all_rev rev_path. 
Qed.\n\n\nLemma connect_to (a : pred V) (g : rel V) x z : connect g x z ->\n  exists y, [/\\ (y \\in a) ==> (x == y) && (x \\in a),\n                 connect g x y & connect (relto a g) y z].\nProof.\nmove=> /connectP [p gxp ->].\npose P := [pred i | let y := nth x (x :: p) i in\n  [&& connect g x y & connect (relto a g) y (last x p)]].\nhave [] := @ex_minnP P.\n  by exists (size p); rewrite /= nth_last (path_connect gxp) //= mem_last.\nmove=> i /= /andP[g1 g2] i_min; exists (nth x (x :: p) i); split=> //.\ncase: i => [|i] //= in g1 g2 i_min *; first by rewrite eqxx /= implybb.\nhave i_lt : i < size p.\n  by rewrite i_min // !nth_last /= (path_connect gxp) //= mem_last.\nhave [<-/=|neq_xpi /=] := altP eqP; first by rewrite implybb.\nhave := i_min i; rewrite ltnn => /contraNF /(_ isT) <-; apply/implyP=> axpi.\nrewrite (connect_trans _ g2) ?andbT //; last first.\n  by rewrite connect1 //= [_ \\in _]axpi /= (pathP x _).\nby rewrite (path_connect gxp) //= mem_nth //= ltnW.\nQed.\n\nLemma connect_from (a : pred V) (g : rel V) x z : connect g x z ->\n  exists y, [/\\ (y \\in a) ==> (z == y) && (z \\in a),\n                connect (relfrom a g) x y & connect g y z].\nProof.\nrewrite connect_rev => cgxz; have [y [ayaz]]//= := connect_to a cgxz.\nby exists y; split; rewrite // connect_rev.\nQed.\n\nLemma connect1l (g : rel V) x z :\n  connect g x z -> z != x -> exists2 y, g x y & connect g y z.\nProof.\nmove=> /connectP [[|y p] //= xyp ->]; first by rewrite eqxx.\nby move: xyp=> /andP[]; exists y => //; apply/connectP; exists p.\nQed.\n\nLemma connect1r (g : rel V) x z :\n  connect g x z -> z != x -> exists2 y, connect g x y & g y z.\nProof.\nmove=> xz zNx; move: xz; rewrite connect_rev => /connect1l.\nby rewrite eq_sym => /(_ zNx) [y]; exists y; rewrite // connect_rev.\nQed.\n\nSection connected.\n\nVariable (g : rel V).\n\nDefinition connected := forall x y, connect g x y.\n\nLemma cover1U (A : {set V}) P : cover (A |: P) = A :|: cover P.\nProof. 
by apply/setP => x; rewrite /cover bigcup_setU big_set1. Qed.\n\nLemma connectedU (A B : {set V}) : {in A &, connected} -> {in B &, connected} ->\n  {in A & B, connected} -> {in B & A, connected} -> {in A :|: B &, connected}.\nProof.\nmove=> cA cB cAB cBA z t; rewrite !inE => /orP[zA|zB] /orP[tA|tB];\nby[apply: cA|apply: cB|apply: cAB|apply: cBA].\nQed.\n\nEnd connected.\n\nSection Symconnect.\n\nVariable r : rel V.\n\n(* x is symconnected to y *)\nDefinition symconnect x y := connect r x y && connect r y x.\n\nLemma symconnect0 : reflexive symconnect.\nProof. by move=> x; apply/andP. Qed.\n\nLemma symconnect_sym : symmetric symconnect.\nProof. by move=> x y; apply/andP/andP=> [] []. Qed.\n\nLemma symconnect_trans : transitive symconnect.\nProof.\nmove=> x y z /andP[Cyx Cxy] /andP[Cxz Czx].\nby rewrite /symconnect (connect_trans Cyx) ?(connect_trans Czx).\nQed.\nHint Resolve symconnect0 symconnect_sym symconnect_trans.\n\nLemma symconnect_equiv : equivalence_rel symconnect.\nProof. by apply/equivalence_relP; split; last apply/sym_left_transitive. Qed.\n\n(*************************************************)\n(* Connected components of the graph, abstractly *)\n(*************************************************)\n\nDefinition sccs := equivalence_partition symconnect setT.\n\nLemma sccs_partition : partition sccs setT.\nProof. by apply: equivalence_partitionP => ?*; apply: symconnect_equiv. Qed.\n\nDefinition cover_sccs := cover_partition sccs_partition.\n\nLemma trivIset_sccs : trivIset sccs.\nProof. by case/and3P: sccs_partition. 
Qed.\nHint Resolve trivIset_sccs.\n\nNotation scc_of := (pblock sccs).\n\nLemma mem_scc x y : x \\in scc_of y = symconnect y x.\nProof.\nby rewrite pblock_equivalence_partition // => ?*; apply: symconnect_equiv.\nQed.\n\nDefinition def_scc scc x := @def_pblock _ _ scc x trivIset_sccs.\n\nDefinition is_subscc (A : {set V}) := A != set0 /\\\n                                      {in A &, forall x y, connect r x y}.\n\nLemma is_subscc_in_scc (A : {set V}) :\n  is_subscc A -> exists2 scc, scc \\in sccs & A \\subset scc.\nProof.\nmove=> []; have [->|[x xA]] := set_0Vmem A; first by rewrite eqxx.\nmove=> AN0 A_sub; exists (scc_of x); first by rewrite pblock_mem ?cover_sccs.\nby apply/subsetP => y yA; rewrite mem_scc /symconnect !A_sub.\nQed.\n\nLemma is_subscc1 x (A : {set V}) : x \\in A ->\n  (forall y, y \\in A -> connect r x y /\\ connect r y x) -> is_subscc A.\nProof.\nmove=> xA AP; split; first by apply: contraTneq xA => ->; rewrite inE.\nby move=> y z /AP [xy yx] /AP [xz zx]; rewrite (connect_trans yx).\nQed.\n\nEnd Symconnect.\n\nLemma setUD (B A C : {set V}) : B \\subset A -> C \\subset B -> \n  (A :\\: B) :|: (B :\\: C) = (A :\\: C).\nProof.\nmove=> subBA subCB; apply/setP=> x; rewrite !inE.\nhave /implyP  := subsetP subBA x; have /implyP  := subsetP subCB x.\nby do !case: (_ \\in _).\nQed.\n\nLemma setUDl (T : finType) (A B : {set T}) : A :|: B :\\: A = A :|: B.\nProof. by apply/setP=> x; rewrite !inE; do !case: (_ \\in _). 
Qed.\n\nLemma subset_cover (sccs sccs' : {set {set V}}) :\n  sccs \\subset sccs' -> cover sccs \\subset cover sccs'.\nProof.\nmove=> /subsetP subsccs; apply/subsetP=> x /bigcupP [scc /subsccs].\nby move=> scc' x_in; apply/bigcupP; exists scc.\nQed.\n\nLemma disjoint1s (A : pred V) (x : V) : [disjoint [set x] & A] = (x \\notin A).\nProof.\napply/pred0P/idP=> [/(_ x)/=|]; first by rewrite inE eqxx /= => ->.\nby move=> xNA y; rewrite !inE; case: eqP => //= ->; apply/negbTE.\nQed.\n\nLemma disjoints1 (A : pred V) (x : V) : [disjoint A & [set x]] = (x \\notin A).\nProof. by rewrite disjoint_sym disjoint1s. Qed.\n\nEnd extra_fintype.\n"
  },
  {
    "path": "notes/tarjan/tarjan_nocolors.v",
    "content": "From mathcomp Require Import all_ssreflect.\nRequire Import extra_nocolors.\n\nSet Implicit Arguments.\nUnset Strict Implicit.\nUnset Printing Implicit Defensive.\n\nSection tarjan.\n\nVariable (V : finType) (successor_seq : V -> seq V).\nNotation successors x := [set y in successor_seq x].\nNotation infty := #|V|.\n\n(*************************************************************)\n(*               Tarjan 72 algorithm,                        *)\n(* rewritten in a functional style  with extra modifications *)\n(*************************************************************)\n\nRecord env := Env {esccs : {set {set V}}; num: {ffun V -> nat}}.\n\nDefinition visited e := [set x | num e x <= infty].\nNotation sn e := #|visited e|.\nDefinition stack e := [set x | num e x < sn e].\n\nDefinition visit x e :=\n  Env (esccs e) (finfun [eta num e with x |-> sn e]).\nDefinition store scc e :=\n  Env (scc |: esccs e) [ffun x => if x \\in scc then infty else num e x].\n\nDefinition dfs1 dfs x e :=\n    let: (n1, e1) as res := dfs (successors x) (visit x e) in\n    if n1 < sn e then res else (infty, store (stack e1 :\\: stack e) e1).\n\nDefinition dfs dfs1 dfs (roots : {set V}) e :=\n  if [pick x in roots] isn't Some x then (infty, e) else\n  let: (n1, e1) := if num e x <= infty then (num e x, e) else dfs1 x e in\n  let: (n2, e2) := dfs (roots :\\ x) e1 in (minn n1 n2, e2).\n\nFixpoint rec k r e :=\n  if k is k.+1 then dfs (dfs1 (rec k)) (rec k) r e\n  else (infty, e).\n\nDefinition e0 := (Env set0 [ffun _ => infty.+1]).\nDefinition tarjan := let: (_, e) := rec (infty * infty.+2) setT e0 in esccs e.\n\n(*****************)\n(* Abbreviations *)\n(*****************)\n\nNotation edge := (grel successor_seq).\nNotation gconnect := (connect edge).\nNotation gsymconnect := (symconnect edge).\nNotation gsccs := (sccs edge).\nNotation gscc_of := (pblock gsccs).\nNotation gconnected := (connected edge).\nNotation new_stack e1 e2 := (stack e2 :\\: stack e1).\nNotation 
new_visited e1 e2 := (visited e2 :\\: visited e1).\nNotation inord := (@inord infty).\n\n(*******************)\n(* next, and nexts *)\n(*******************)\n\nSection Nexts.\nVariable (D : {set V}).\n\nDefinition nexts (A : {set V}) :=\n  \\bigcup_(v in A) [set w in connect (relfrom (mem D) edge) v].\n\nLemma nexts0 : nexts set0 = set0.\nProof. by rewrite /nexts big_set0. Qed.\n\nLemma nexts1 x :\n  nexts [set x] = x |: (if x \\in D then nexts (successors x) else set0).\nProof.\napply/setP=> y; rewrite /nexts big_set1 !inE.\nhave [->|neq_yx/=] := altP eqP; first by rewrite connect0.\napply/idP/idP=> [/connect1l[]// z/=/andP[/= xD xz zy]|].\n  by rewrite xD; apply/bigcupP; exists z; rewrite !inE.\ncase: ifPn; rewrite ?inE// => xD /bigcupP[z]; rewrite !inE.\nby move=> xz; apply/connect_trans/connect1; rewrite /= xD.\nQed.\n\nLemma nextsU A B : nexts (A :|: B) = nexts A :|: nexts B.\nProof. exact: bigcup_setU. Qed.\n\nLemma nextsS (A : {set V}) : A \\subset nexts A.\nProof. by apply/subsetP=> a aA; apply/bigcupP; exists a; rewrite ?inE. Qed.\n\nLemma nextsT : nexts setT = setT.\nProof. by apply/eqP; rewrite eqEsubset nextsS subsetT. 
Qed.\n\nLemma nexts_id (A : {set V}) : nexts (nexts A) = nexts A.\nProof.\napply/eqP; rewrite eqEsubset nextsS andbT; apply/subsetP=> x.\nmove=> /bigcupP[y /bigcupP[z zA]]; rewrite !inE => /connect_trans yto /yto zx.\nby apply/bigcupP; exists z; rewrite ?inE.\nQed.\n\nLemma in_nextsW A y : y \\in nexts A -> exists2 x, x \\in A & gconnect x y.\nProof.\nmove=>/bigcupP[x xA]; rewrite inE => xy; exists x => //.\nby apply: connect_sub xy => u v /andP[_ /connect1].\nQed.\n\nEnd Nexts.\n\nLemma sub_nexts (D D' A B : {set V}) :\n  D \\subset D' -> A \\subset B -> nexts D A \\subset nexts D' B.\nProof.\nmove=> /subsetP subD /subsetP subAB; apply/subsetP => v /bigcupP[a /subAB aB].\nrewrite !inE => av; apply/bigcupP; exists a; rewrite ?inE //=.\nby apply: connect_sub av => x y /andP[xD xy]; rewrite connect1//= subD.\nQed.\n\nLemma nextsUI A B C : nexts B A \\subset A ->\n  A :|: nexts (B :&: ~: A) C = A :|: nexts B C.\nProof.\nmove=> subA; apply/setP=> y; rewrite !inE; have [//|/= yNA] := boolP (y \\in A).\napply/idP/idP; first by apply: subsetP; rewrite sub_nexts// subsetIl.\nmove=> /bigcupP[z zr zy]; apply/bigcupP; exists z; first by [].\nrewrite !inE; apply: contraTT isT => Nzy; move: zy; rewrite !inE.\nmove=> /(connect_from (mem (~: A))) /= [t].\nrewrite !inE => -[xtxy zt ty]; move: zt.\nrewrite (@eq_connect _ _ (relfrom (mem (B :&: ~: A)) edge)); last first.\n  by move=> u v /=; rewrite !inE andbCA andbA.\ncase: (altP eqP) xtxy => /= [<-|neq_yt]; first by rewrite (negPf Nzy).\nrewrite implybF negbK => tA zt; rewrite -(negPf yNA) (subsetP subA)//.\nby apply/bigcupP; exists t; rewrite // inE.\nQed.\n\nLemma nexts1_split (A : {set V}) x : x \\in A ->\n  nexts A [set x] = x |: nexts (A :\\ x) (successors x).\nProof.\nmove=> xA; apply/setP=> y; apply/idP/idP; last first.\n  rewrite nexts1 !inE xA; case: (_ == _); rewrite //=.\n  by apply: subsetP; rewrite sub_nexts// subsetDl.\nmove=> /bigcupP[z]; rewrite !inE => /eqP {z}->.\nmove=> /connectP[p /shortenP[[_ _ _ /eqP->//|z 
q/=/andP[/andP[_ xz]]]]].\nrewrite path_from => /andP[zq] /allP/= qA.\nmove=> /and3P[xNzq _ _] _ ->; apply/orP; right.\napply/bigcupP; exists z; rewrite !inE//.\napply/connectP; exists q; rewrite // path_from zq/=.\napply/allP=> t tq; rewrite !inE qA ?andbT//.\nby apply: contraNneq xNzq=> <-; apply: mem_belast tq.\nQed.\n\n(*******************)\n(* Well formed env *)\n(*******************)\n\nLemma num_le_infty e x : num e x <= infty = (x \\in visited e).\nProof. by rewrite inE. Qed.\n\nLemma num_lt_sn e x : num e x < sn e = (x \\in stack e).\nProof. by rewrite inE. Qed.\n\nLemma visited_visit e x : visited (visit x e) = x |: visited e.\nProof.\nby apply/setP=> y; rewrite !inE ffunE/=; case: (altP eqP); rewrite ?max_card.\nQed.\n\nLemma sub_stack_visited e : stack e \\subset visited e.\nProof.\nby apply/subsetP => x; rewrite !inE => /ltnW /leq_trans ->//; rewrite max_card.\nQed.\n\nLemma sub_new_stack_visited e1 e2: new_stack e1 e2 \\subset visited e2.\nProof. by rewrite (subset_trans _ (sub_stack_visited _)) ?subsetDl. Qed.\n\nSection wfenv.\n\nRecord wf_env e := WfEnv {\n  sub_gsccs : esccs e \\subset gsccs;\n  num_lt_V_is_stack : forall x, num e x < infty -> num e x < sn e;\n  num_sccs : forall x, (num e x == infty) = (x \\in cover (esccs e));\n  le_connect : forall x y, num e x <= num e y < sn e -> gconnect x y;\n}.\n\nVariables (e : env) (e_wf : wf_env e).\n\nLemma num_gt_V x : x \\notin visited e -> num e x > infty.\nProof. by rewrite inE -ltnNge. Qed.\n\nLemma num_lt_V x : (num e x < infty) = (num e x < sn e).\nProof.\napply/idP/idP => [/num_lt_V_is_stack//|]; first exact.\nby move=> /leq_trans; apply; rewrite max_card.\nQed.\n\nLemma num_lt_card x (A : pred V) : visited e \\subset A ->\n  (num e x < #|A|) = (num e x < sn e).\nProof.\nmove=> subeA; apply/idP/idP => /leq_trans.\n  by rewrite -num_lt_V; apply; rewrite max_card.\nby apply; rewrite subset_leq_card.\nQed.\n\nLemma visitedE : visited e = stack e :|: cover (esccs e).\nProof. 
by apply/setP=> x; rewrite !inE leq_eqVlt -num_sccs// num_lt_V orbC. Qed.\n\nLemma sub_sccs_visited : cover (esccs e) \\subset visited e.\nProof. by apply/subsetP => x; rewrite !inE -num_sccs// => /eqP->. Qed.\n\nLemma stack_visit x : x \\notin visited e -> stack (visit x e) = x |: stack e.\nProof.\nmove=> xNvisited; apply/setP=> y; rewrite !inE/= ffunE/= visited_visit.\nhave [->|neq_yx]//= := altP eqP; first by rewrite cardsU1 xNvisited ltnS ?leqnn.\nby rewrite num_lt_card// subsetUr.\nQed.\n\nEnd wfenv.\n\nLemma wf_visit e x : wf_env e ->\n   (forall y, num e y < sn e -> gconnect y x) ->\n   x \\notin visited e -> wf_env (visit x e).\nProof.\nmove=> e_wf x_connected xNvisited.\nconstructor=> [|y|y|] //=; rewrite ?inE ?ffunE/=.\n- exact: sub_gsccs.\n- rewrite visited_visit cardsU1 xNvisited; case: ifPn => // _.\n  by rewrite num_lt_V// ltnS => /ltnW.\n- have [->|] := altP (y =P x); last by rewrite num_sccs.\n  rewrite -num_sccs// eq_sym !gtn_eqF ?num_gt_V//.\n  by rewrite (@leq_trans #|x |: visited e|) ?max_card// cardsU1 xNvisited.\nmove=> y z; rewrite !ffunE/=.\nhave sub_visit : visited e \\subset visited (visit x e).\n  by apply/subsetP => ?; rewrite visited_visit !inE orbC => ->.\nhave [{y}->|neq_yx] := altP eqP; have [{z}->|neq_zx]//= := altP eqP.\n+ by rewrite num_lt_card//; case: ltngtP.\n+ move=> /andP[/leq_ltn_trans lt/lt].\n  by rewrite num_lt_card//; apply: x_connected.\n+ by rewrite num_lt_card//; apply: le_connect.\nQed.\n\nDefinition subenv e1 e2 := [&&\n  esccs e1 \\subset esccs e2,\n  [forall x, (num e1 x <= infty) ==> (num e2 x == num e1 x)] &\n  [forall x, (num e2 x < sn e1) ==> (num e1 x < sn e1)]].\n\nLemma sub_sccs e1 e2 : subenv e1 e2 -> esccs e1 \\subset esccs e2.\nProof. by move=> /and3P[]. Qed.\n\nLemma sub_snum e1 e2 : subenv e1 e2 -> forall x, num e1 x <= infty ->\n  num e2 x = num e1 x.\nProof. by move=> /and3P[_ /forall_inP /(_ _ _) /eqP]. 
Qed.\n\nLemma sub_vnum e1 e2 : subenv e1 e2 -> forall x, num e1 x < sn e1 ->\n  num e2 x = num e1 x.\nProof.\nmove=> sube12 x /ltnW num_lt; rewrite (sub_snum sube12)//.\nby rewrite (leq_trans num_lt)// max_card.\nQed.\n\nLemma sub_num_lt e1 e2 : subenv e1 e2 ->\n  forall x, (num e1 x < sn e1) = (num e2 x < sn e1).\nProof.\nmove=> /and3P[_ /forall_inP /(_ _ _)/eqP num_eq /forall_inP] num_lt x.\nhave nume1_lt := num_lt x; apply/idP/idP => // {nume1_lt}nume1_lt.\nby rewrite num_eq ?inE// (leq_trans (ltnW nume1_lt))//  max_card.\nQed.\n\nLemma sub_visited e1 e2 : subenv e1 e2 -> visited e1 \\subset visited e2.\nProof.\nmove=> sube12; apply/subsetP=> x; rewrite !inE => x_visited1.\nby rewrite (sub_snum sube12)// inE.\nQed.\n\nLemma leq_sn e1 e2 : subenv e1 e2 -> sn e1 <= sn e2.\nProof. by move=> sube12; rewrite subset_leq_card// sub_visited. Qed.\n\nLemma sub_stack e1 e2 : subenv e1 e2 -> stack e1 \\subset stack e2.\nProof.\nmove=> sube12; apply/subsetP=> x; rewrite !inE => x_stack.\nby rewrite (sub_vnum sube12)// (leq_trans x_stack)// leq_sn.\nQed.\n\nLemma new_stackE e1 e2 : subenv e1 e2 ->\n  new_stack e1 e2 = [set x | sn e1 <= num e2 x < sn e2].\nProof.\nmove=> sube12; apply/setP=> x; rewrite !inE.\nhave [x_e2|] := ltnP (num e2 x) (sn e2); rewrite ?andbT ?andbF//.\nhave [e1_after|e1_before] /= := leqP (sn e1) (num e1 x).\n  by rewrite leqNgt -sub_num_lt// -leqNgt.\nby rewrite leqNgt -sub_num_lt// e1_before.\nQed.\n\nLemma new_visitedE e1 e2 : wf_env e1 -> wf_env e2 -> subenv e1 e2 ->\n  (new_visited e1 e2) =\n    (new_stack e1 e2) :|: cover (esccs e2) :\\: cover (esccs e1).\nProof.\nmove=> e1_wf e2_wf sube12; rewrite !visitedE//; apply/setP=> x.\nrewrite !inE -!num_sccs -?num_lt_V//; do 2!case: ltngtP => //=.\n  by rewrite num_lt_V// (sub_num_lt sube12)// => ->; rewrite ltnNge max_card.\nby move=> xe2 xe1; move: xe2; rewrite (sub_snum sube12)// ?xe1// ltnn.\nQed.\n\nLemma sub_new_stack_new_visited e1 e2 :\n    subenv e1 e2 -> wf_env e1 -> wf_env e2 ->\n  (new_stack 
e1 e2) \\subset (new_visited e1 e2).\nProof.\nby move=> e1wf e2wf sube12; rewrite (@new_visitedE e1 e2)// subsetUl.\nQed.\n\nLemma sub_refl e : subenv e e.\nProof. by rewrite /subenv !subxx /=; apply/andP; split; apply/forall_inP. Qed.\nHint Resolve sub_refl.\n\nLemma sub_trans : transitive subenv.\nProof.\nmove=> e2 e1 e3 sub12 sub23; rewrite /subenv.\nrewrite (subset_trans (sub_sccs sub12))// ?sub_sccs//=.\napply/andP; split; apply/forall_inP=> x xP.\n  by rewrite (sub_snum sub23) ?(sub_snum sub12)//.\nhave x2 : num e3 x < sn e2 by rewrite (leq_trans xP)// leq_sn.\nby rewrite (sub_num_lt sub12)// -(sub_vnum sub23)// (sub_num_lt sub23).\nQed.\n\nLemma sub_visit e x : x \\notin visited e -> subenv e (visit x e).\nProof.\nmove=> xNvisited; rewrite /subenv subxx/=; apply/andP; split; last first.\n  by apply/forall_inP => y; rewrite !ffunE/=; case: ifP; rewrite ?ltnn.\napply/forall_inP => y y_in; rewrite !ffunE/=.\nby case: (altP (y =P x)) xNvisited => // <-; rewrite inE y_in.\nQed.\n\nLemma visited_store (A : {set V}) e : A \\subset visited e ->\n  visited (store A e) = visited e.\nProof.\nmove=> A_sub; apply/setP=> x; rewrite !inE/= ffunE.\nby case: ifPn => // /(subsetP A_sub); rewrite inE leqnn => ->.\nQed.\n\nLemma stack_store (A : {set V}) e : A \\subset visited e ->\n  stack (store A e) = stack e :\\: A.\nProof.\nmove=> A_sub; apply/setP => x; rewrite !inE visited_store//= ffunE.\nby case: (x \\in A); rewrite //= ltnNge max_card.\nQed.\n\n(*********************)\n(* DFS specification *)\n(*********************)\n\nDefinition outenv (roots : {set V}) (e e' : env) := [/\\\n  {in new_stack e e' &, gconnected},\n  {in new_stack e e', forall x, exists2 y, y \\in stack e & gconnect x y} &\n  visited e' = visited e :|: nexts (~: visited e) roots ].\n\nVariant dfs_spec_def (dfs : nat * env) (roots : {set V}) e :\n  (nat * env) -> nat -> env -> Type := DfsSpec ne' (n : nat) e' of\n    ne' = (n, e') &\n    n = \\min_(x in nexts (~: visited e) roots) inord (num e' x) &\n   
 wf_env e' & subenv e e' & outenv roots e e' :\n  dfs_spec_def dfs roots e ne' n e'.\nNotation dfs_spec ne' roots e := (dfs_spec_def ne' roots e ne' ne'.1 ne'.2).\n\nDefinition dfs_correct dfs (roots : {set V}) e := wf_env e ->\n  {in stack e & roots, gconnected} -> dfs_spec (dfs roots e) roots e.\nDefinition dfs1_correct dfs1 x e := wf_env e -> x \\notin visited e ->\n  {in stack e & [set x], gconnected} -> dfs_spec (dfs1 x e) [set x] e.\n\n(*****************)\n(* Correctness *)\n(*****************)\n\nLemma dfsP dfs1 dfsrec (roots : {set V}) e:\n  (forall x, x \\in roots -> dfs1_correct dfs1 x e) ->\n  (forall x, x \\in roots -> forall e1, subenv e e1 ->\n     dfs_correct dfsrec (roots :\\ x) e1) ->\n  dfs_correct (dfs dfs1 dfsrec) roots e.\nProof.\nrewrite /dfs => dfs1P dfsP e_wf roots_connected.\ncase: pickP => /= [x x_roots|]; last first.\n  move=> r0; have {r0}r_eq0 : roots = set0 by apply/setP=> x; rewrite inE.\n  do ?constructor=> //=;\n    rewrite ?setDv ?r_eq0 ?nexts0 ?sub0set ?eqxx ?setU0 ?big_set0 //=;\n    by move=> ?; rewrite inE.\nhave [numx_gt|numx_le]/= := ltnP; last first.\n  have x_visited : x \\in visited e by rewrite inE.\n  case: dfsP => //= [u v ve|_ _ e1 ->-> e1_wf subee1 [new1c new1old visited1E]].\n    by rewrite inE => /andP[_ v_roots]; rewrite roots_connected.\n  constructor => //=.\n    rewrite -[in RHS](setD1K x_roots) nextsU nexts1 inE x_visited/= setU0.\n    by rewrite bigmin_setU /= big_set1/= (@sub_snum e e1)// inordK//.\n  constructor=> //=; rewrite -(setD1K x_roots) nextsU nexts1 inE x_visited/=.\n  by rewrite setU0 setUCA setUA [x |: _](setUidPr _) ?sub1set.\ncase: dfs1P => //=; first by rewrite inE -ltnNge.\n  by move=> u v ue; rewrite inE => /eqP->; apply: roots_connected.\nmove=> _ _  e1 -> -> e1_wf subee1 [new1c new1old visited1E].\ncase: dfsP => //= [u v ue1|_ _ e2 -> -> e2_wf sube12 [new2c new2old visited2E]].\n  rewrite inE => /andP[_ v_roots].\n  have [ue|uNe] := boolP (u \\in stack e); first by rewrite 
roots_connected.\n  have [|w we] := new1old u; first by rewrite inE ue1 uNe.\n  by move=> /connect_trans->//; rewrite roots_connected//.\nhave sube2 : subenv e e2 by exact: sub_trans sube12.\nhave nexts_split : nexts (~: visited e) roots =\n      nexts (~: visited e) [set x] :|: nexts (~: visited e1) (roots :\\ x).\n  rewrite -[in LHS](setD1K x_roots) nextsU visited1E.\n  by rewrite setCU nextsUI// nexts_id.\nconstructor => //=.\n  rewrite (eq_bigr (inord \\o num e2)).\n   by rewrite -[LHS]/(val (ord_minn _ _)) -bigmin_setU /= -nexts_split.\n  move=> y y_in; rewrite /= (@sub_snum e1 e2)// num_le_infty.\n  by rewrite visited1E setUC inE y_in.\nconstructor => /=.\n+ rewrite -(@setUD _ (stack e1)) ?sub_stack//.\n  apply: connectedU => // y z; last first.\n    rewrite !new_stackE// ?inE => /andP[y_ge y_lt] /andP[z_ge z_lt].\n    rewrite (@le_connect e2) // z_lt (leq_trans _ z_ge)//.\n    by rewrite (sub_vnum sube12)// ltnW.\n  rewrite !new_stackE// ?inE => /andP[y_ge y_lt] /andP[z_ge z_lt].\n  have [|r] := new2old y; rewrite ?new_stackE ?inE ?y_ge//.\n  move=> r_lt /connect_trans->//; have [rz|zr] := leqP (num e1 r) (num e1 z).\n    by rewrite (@le_connect e1)// rz/=.\n  by rewrite new1c ?new_stackE ?inE ?z_ge ?z_lt //= (leq_trans z_ge)// ltnW.\n+ move=> y; rewrite ?new_stackE ?inE// => /andP[y_ge y_lt].\n  have [y_lt1|y_ge1] := ltnP (num e1 y) (sn e1).\n    have [|r] := new1old y; last by exists r.\n    by rewrite new_stackE ?inE// ?y_lt1 -(sub_vnum sube12) ?y_ge.\n  have [|r r_lt1 yr] := new2old y; first by rewrite !inE -leqNgt y_ge1//.\n  rewrite ?inE in r_lt1; have [r_lt|r_ge] := ltnP (num e r) (sn e).\n    by exists r; rewrite ?inE.\n  have [|r' r's rr'] := new1old r; first by rewrite ?inE -leqNgt r_ge r_lt1.\n  by exists r'; rewrite // (connect_trans yr rr').\n+ by rewrite visited2E {1}visited1E nexts_split setUA.\nQed.\n\nLemma dfs1P dfs x e (A := successors x) :\n  dfs_correct dfs A (visit x e) -> dfs1_correct (dfs1 dfs) x e.\nProof.\nrewrite /dfs1 => dfsP e_wf 
xNvisited x_connected.\nhave subexe: subenv e (visit x e) by exact: sub_visit.\nhave numx : num e x > infty by apply: num_gt_V.\nhave xNstack : x \\notin stack e.\n  by rewrite inE -leqNgt (leq_trans _ numx) ?leqW ?max_card.\nhave xe_wf : wf_env (visit x e).\n  by apply: wf_visit => // y y_lt; rewrite x_connected ?inE.\nhave nexts1E : nexts (~: visited e) [set x] =\n    x |: nexts (~: (x |: visited e)) A.\n  by rewrite nexts1_split ?setDE ?setCU 1?setIC 1?inE.\ncase: dfsP => //=.\n  rewrite stack_visit// => u v; rewrite in_setU1=> /predU1P[->|ue];\n  rewrite inE => /(@connect1 _ edge)// /(connect_trans _)->//.\n  by rewrite x_connected// set11.\nmove=> _ _ e1 //= -> -> e1_wf subxe1 [newc new_old visited1E].\nhave sube1 : subenv e e1 by apply: sub_trans subxe1.\nhave num1x : num e1 x = sn e.\n  by rewrite (sub_snum subxe1)// ?inE ?ffunE/= ?eqxx// max_card.\nrewrite visited_visit in visited1E *.\nhave lt_sn_sn1 : sn e < sn e1.\n  by rewrite (leq_trans _ (leq_sn subxe1))// visited_visit cardsU1 xNvisited.\nhave x_visited1 : x \\in visited e1 by rewrite visited1E inE setU11.\nhave x_stack : x \\in stack e1.\n  by rewrite (subsetP (sub_stack subxe1))//= stack_visit// setU11.\nhave [min_after|min_before] := leqP; last first.\n  constructor => //=.\n    rewrite nexts1E bigmin_setU big_set1 /= inordK ?num1x ?ltnS ?max_card//.\n    by rewrite (minn_idPr _)// ltnW.\n  constructor=> //=; last by rewrite nexts1E setUCA setUA visited1E.\n    move=> y z; have [-> _|neq_yx] := eqVneq y x.\n      by rewrite new_stackE ?inE// -num1x; apply: le_connect.\n    rewrite -(@setUD _ (stack (visit x e))) ?sub_stack//.\n    rewrite [in X in _ :|: X]stack_visit// setDUl setDv setU0.\n    rewrite [_ :\\: stack e](setDidPl _) ?disjoint1s//.\n    rewrite setUC !in_setU1 (negPf neq_yx)/=.\n    move=> y_e1 /predU1P[->|]; last exact: newc y_e1.\n    have [t] := new_old y y_e1; rewrite !inE => t_le /connect_trans->//.\n    rewrite (@le_connect (visit x e))// andbC; move: t_le.\n    by rewrite 
visited_visit !ffunE /= eqxx cardsU1 xNvisited add1n !ltnS leqnn.\n  move=> y; have [v ve xv] : exists2 v, v \\in stack e & gconnect x v.\n    have [|v] := @eq_bigmin_cond _ _ (mem (nexts (~: (x |: visited e)) A))\n                               (inord \\o num e1).\n      rewrite card_gt0; apply: contraTneq min_before => ->.\n      by rewrite big_set0 -leqNgt max_card.\n    rewrite !inE => v_in min_is_v; move: min_before; rewrite min_is_v/=.\n    rewrite inordK; last by rewrite ltnS num_le_infty visited1E inE v_in orbT.\n    rewrite -sub_num_lt// => v_lt; exists v; rewrite ?inE//.\n    move: v_in => /in_nextsW[z]; rewrite inE => /(@connect1 _ edge).\n    by apply: connect_trans.\n  rewrite -(@setUD _ (stack (visit x e))) ?sub_stack//.\n  rewrite [in X in _ :|: X]stack_visit// setDUl setDv setU0.\n  rewrite [_ :\\: stack e](setDidPl _) ?disjoint1s// setUC !in_setU1.\n  move=> /predU1P[->|]; first by exists v.\n  move=> /new_old[z]; rewrite stack_visit// in_setU1.\n  move=> /predU1P[->|]; last by exists z.\n  by move=> yx; exists v; rewrite // (connect_trans yx).\nhave all_geq y : y \\in nexts (~: visited e) [set x] ->\n  (#|visited e| <= num e1 y) * (num e1 y <= infty).\n  have := min_after; have sn_inord : sn e = inord (sn e).\n    by rewrite inordK// ltnS max_card.\n  rewrite {1}sn_inord; move/bigmin_geqP => /(_ y) y_ge.\n  rewrite nexts1E !inE => /predU1P[->|yA]; rewrite ?num1x ?max_card ?leqnn//.\n  rewrite sn_inord (leq_trans (y_ge _))// ?inordK//;\n  by rewrite ?ltnS num_le_infty visited1E 2!inE yA orbT.\nconstructor => //=.\n- rewrite big1// => y xy; rewrite ffunE new_stackE ?inE//=.\n  have y_visited1 : num e1 y <= infty.\n    by rewrite num_le_infty visited1E -setUA setUCA -nexts1E inE xy orbT.\n  apply/val_inj=> /=; case: ifPn; rewrite ?inordK//.\n  by rewrite all_geq//= -num_lt_V// -leqNgt; move: y_visited1; case: ltngtP.\n- constructor => //=; rewrite ?visited_store ?sub_new_stack_visited//.\n  + rewrite subUset sub_gsccs// andbT sub1set.\n    suff -> : 
new_stack e e1 = gscc_of x by rewrite pblock_mem ?cover_sccs.\n    apply/setP=> y; rewrite mem_scc /symconnect.\n    have [->|neq_yx] := eqVneq y x.\n      by rewrite connect0 inE xNstack inE num1x lt_sn_sn1.\n    apply/idP/andP=> [|[xy yx]].\n      move=> y_ee1; have y_xee1 : y \\in new_stack (visit x e) e1.\n        by rewrite inE stack_visit// in_setU1 (negPf neq_yx)/= -in_setD.\n      split; last first.\n        have [z] := new_old _ y_xee1.\n        rewrite stack_visit// in_setU1 => /predU1P[->//|/x_connected].\n        by move=> /(_ _ (set11 x))/(connect_trans _) xz /xz.\n      have: y \\in new_visited (visit x e) e1.\n        by apply: subsetP y_xee1; rewrite sub_new_stack_new_visited.\n      rewrite inE visited1E in_setU visited_visit//; case: (y \\in _) => //=.\n      move=> /in_nextsW[z]; rewrite inE=> /(@connect1 _ edge).\n      exact: connect_trans.\n    have /(connect_from (mem (~: visited e))) [z []] := xy; rewrite inE.\n    move=> eq_yz xz zy; have /all_geq [] : z \\in nexts (~: visited e) [set x].\n      by apply/bigcupP; exists x; rewrite !inE.\n    rewrite leqNgt -sub_num_lt// -num_lt_V// -leqNgt => zNstack.\n    have zNcover e' : wf_env e' -> z \\in cover (esccs e') ->\n                      x \\in cover (esccs e').\n      move=> e'_wf /bigcupP[C] Ce zC; apply/bigcupP; exists C => //.\n      have /def_scc: C \\in gsccs by apply: subsetP Ce; apply: sub_gsccs.\n      move=> /(_ _ zC)<-; rewrite mem_scc /= /symconnect (connect_trans zy)//=.\n      by apply: connect_sub xz => ?? 
/andP[_ /connect1].\n    rewrite leq_eqVlt num_sccs// num_lt_V// => /orP[|z_stack].\n       move=> /zNcover; rewrite -num_sccs// num1x => /(_ _) /eqP eq_V.\n       by rewrite eq_V// ltnNge max_card in lt_sn_sn1.\n    have zNvisited : z \\notin visited e.\n      rewrite inE -ltnNge ltn_neqAle zNstack andbT/= eq_sym num_sccs//.\n      by apply: contraTN isT => /(zNcover _ e_wf); rewrite -num_sccs// gtn_eqF.\n    move: eq_yz; rewrite zNvisited /= => /andP[/eqP eq_yz _].\n    rewrite -eq_yz in zNstack z_stack.\n    by rewrite !inE -num_lt_V// -leqNgt zNstack.\n  + move=> v; rewrite ffunE/=; case: ifPn; rewrite ?ltnn// => vNe12.\n    by rewrite num_lt_V// visited_store.\n  + move=> v; rewrite ffunE /= cover1U [in RHS]inE.\n    by case: ifPn; rewrite ?eqxx//= => vNe12; rewrite -num_sccs//.\n  + move=> y z; rewrite !ffunE; case: ifPn => _.\n      by move=> /andP[/leq_ltn_trans Vsmall/Vsmall]; rewrite ltnNge max_card.\n    by case: ifPn => _; [by rewrite ltnNge max_card andbF|exact : le_connect].\n- rewrite /subenv /= (subset_trans (sub_sccs sube1)) ?subsetUr//=.\n  apply/andP; split; apply/forallP => v; apply/implyP;\n  rewrite ffunE/= new_stackE// ?inE.\n    move=> vs; rewrite (sub_snum sube1)// leqNgt -!num_lt_V// -leqNgt ifN//.\n    by apply/negP => /andP[/leq_ltn_trans Vlt/Vlt]; rewrite ltnNge max_card.\n  by case: ifPn; [move=> _; rewrite ltnNge max_card|rewrite -sub_num_lt].\n- rewrite /outenv stack_store ?visited_store ?sub_new_stack_visited//.\n  rewrite setDDr setDUl setDv set0D set0U setDIl !setDv setI0.\n  split; do ?by move=> ?; rewrite inE.\n  by rewrite visited1E -setUA setUCA -nexts1E.\nQed.\n\nTheorem rec_terminates k (roots : {set V}) e :\n  k >= #|~: visited e| * infty.+1 + #|roots| -> dfs_correct (rec k) roots e.\nProof.\nmove=> k_ge; elim: k => [|k IHk/=] in roots e k_ge *.\n  move: k_ge; rewrite leqn0 addn_eq0 cards_eq0 => /andP[_ /eqP-> e_wf _]/=.\n  constructor=> //=; rewrite /outenv ?nexts0 ?setDv ?big_set0// ?setU0.\n  by split=> // ?; rewrite 
inE.\napply: dfsP=> x x_roots; last first.\n  move=> e1 subee1; apply: IHk; rewrite -ltnS (leq_trans _ k_ge)//.\n  rewrite (cardsD1 x roots) x_roots add1n -addSnnS ltn_add2r ltnS.\n  by rewrite leq_mul2r //= subset_leq_card// setCS sub_visited.\nmove=> e_wf xNvisited; apply: dfs1P => //; apply: IHk.\nrewrite visited_visit setCU setIC -setDE -ltnS (leq_trans _ k_ge)//.\nrewrite (cardsD1 x (~: _)) inE xNvisited add1n mulSnr -addnA ltn_add2l.\nby rewrite ltn_addr// ltnS max_card.\nQed.\n\nLemma visited0 : visited e0 = set0.\nProof. by apply/setP=> y; rewrite !inE ffunE ltnn. Qed.\n\nLemma stack0 : stack e0 = set0.\nProof. by apply/setP=> y; rewrite !inE ffunE ltnNge leqW ?max_card. Qed.\n\nTheorem tarjan_correct : tarjan = gsccs.\nProof.\nrewrite /tarjan mulnSr; case: rec_terminates.\n- by rewrite visited0 setC0 cardsT.\n- constructor; rewrite /= ?sub0set// => x; rewrite !ffunE//.\n  + by rewrite ltnNge leqW//.\n  + by rewrite gtn_eqF// /cover big_set0 inE.\n  + by move=> y; rewrite !ffunE//= andbC ltnNge leqW// ?max_card.\n- by move=> y; rewrite !inE !ffunE/= ltnNge leqW// max_card.\nmove=> _ _ e -> _ e_wf _ [_]; rewrite stack0 setD0.\nhave [stacke _|[x xe]] := set_0Vmem (stack e); last first.\n  by move=> /(_ _ xe)[?]; rewrite inE.\nrewrite visited0 set0U setC0 nextsT => visitede.\nhave numE x : num e x = infty.\n  apply/eqP; have /setP/(_ x) := visitede.\n  by rewrite visitedE// stacke set0U !inE -num_sccs.\napply/eqP; rewrite eqEsubset sub_gsccs//=; apply/subsetP => _/imsetP[/=x _->].\nhave: x \\in cover (esccs e) by rewrite -num_sccs ?numE//.\nmove=> /bigcupP [C Csccs /(def_scc (subsetP (sub_gsccs e_wf) _ Csccs))] eqC.\nrewrite -eqC (_ : [set _ in _ | _] = gscc_of x)// in Csccs *.\nby apply/setP => y; rewrite !inE mem_scc /=.\nQed.\n\nEnd tarjan.\n"
  },
  {
    "path": "notes/tarjan.md",
    "content": "so as we're going through the depth first search, we store: a stack of visited but not yet assigned vertices, pushed onto the stack in the order they are visited; a set of finalized components; the current serial number; and a \"function\" (map?) of vertices to serial numbers.\n\ntwo mutually recursive functions, they call then `dfs1` and `dsf`, but those are awful names, I'll renamed once I fully understand them\n\n- `dfs` takes a set of roots and an initial environment, returns a pair of an integer and the modified environment. if the roots is empty the integer is `infinity` (what they should have done is just used `option` or something here). Otherwise the returned integer is the minimum of the results of the calls to `dfs1` on non-visited vertices in r and of the serial numbers of the already visited ones.\n- `dfs1`\n\nthe main function creates the initial environment with an empty stack, empty set of components, serial number 0, and an empty map assigning numbers to vertices\n\n\n```\nlet rec dfs1 vertex e =\n  let n0 = e.cur in\n  let (n1, e1) = dfs (successors vertex) (add_stack_incr vertex e) in\n  if n1 < n0\n    then (n1, e1)\n    else\n      let (s2, s3) = split x e1.stack in\n      (+∞, {stack = s3; sccs = add (elements s2) e1.sccs; cur = e1.cur; num = set_infty s2 e1.num})\n\nwith dfs r e =\n  if is_empty r\n    then (+∞, e)\n    else\n      let x = choose r in\n      let r’ = remove x r in\n      let (n1, e1) = if e.num[x] != -1\n        then (e.num[x], e)\n        else dfs1 x e in\n      let (n2, e2) = dfs r’ e1 in (min n1 n2, e2)\n\nlet tarjan () =\n  let e = {stack = []; sccs = empty; cur = 0; num = const (-1)} in\n  let (_, e’) = dfs vertices e in e’.sccs\n\nlet add_stack_incr x e =\n  let n = e.cur in\n  {stack = x :: e.stack; sccs = e.sccs; cur = n+1; num = e.num[x ← n]}\n\nlet rec set_infty s f = match s with\n  | [] → f\n  | x :: s’ → (set_infty s’ f)[x ← +∞] end\n\nlet rec split x s = match s with\n  | [] → ([], [])\n  | y :: 
s’ → if x = y\n    then ([x], s’)\n    else\n      let (s1’, s2) = split x s’ in\n      (y :: s1’, s2) end\n```\n\nlooks like I need to check out their better version\nhttps://www-sop.inria.fr/marelle/Tarjan/\nhttps://math-comp.github.io/mcb/\n\nŁukasz Czajka and Cezary Kaliszyk. Hammer for coq: Automation for dependent\ntype theory. J. Autom. Reasoning, 61(1-4):423–453, 2018.\n"
  },
  {
    "path": "notes.md",
    "content": "modified orphan rule:\ntraits can have crate *automatic derive implementations* that will \"kick in\" when a type the trait author hasn't explicitly defined an implementation for \"requests\" one. this means such trait authors could merely define manual implementations for the primitive types. this automatic implementation can be superseded by an explicit implementation in the type crate\nwhat about derive arguments for third-party crates? honestly easy to still solve using the newtype pattern\n\n\nhttps://people.mpi-sws.org/~dreyer/papers/sandboxing/paper.pdf\n\n\nhttps://people.mpi-sws.org/~beta/papers/unicoq.pdf\nhttps://www.sciencedirect.com/science/article/pii/S089054010300138X?via%3Dihub\nhttps://golem.ph.utexas.edu/category/2021/08/you_could_have_invented_de_bru.html\nhttps://proofassistants.stackexchange.com/questions/900/when-should-i-use-de-bruijn-levels-instead-of-indices\nhttps://www.sciencedirect.com/science/article/pii/0167642395000216\nhttps://arxiv.org/pdf/1102.2405.pdf\nhttps://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.156.1170\nhttps://davidchristiansen.dk/tutorials/nbe/\n\n\n\nwhen defining the \"handy\" or base equality proposition (`==`), why not make it an `or` over strict definitional equality (`===`) and an existence proof over a more complex setoid equality? or simply not include it as a base at all and let operator chaining allow people to use more complex custom equality concepts themselves?\nmaybe we should pull apart \"definitional\" or \"shallow\" equality from \"computable\" equality? is that useful?\n\n\n\n\nhttps://inria.hal.science/hal-01094195/preview/CIC.pdf\nnotes on `match`\n`match t as x` could probably just be rewritten with a `let` beforehand?\nor do like rust!\n`match let x: I[y1, ... yp] = t {}`\n\nthe `in I y1 . . . yp` is basically a destructuring of the *type* of the match target. 
it allows you to use the *index* values of the type in the body.\na better syntax that makes this clearer would be `match t: I y1 ... yp { ...match arms... }`\n\nevery arm can have a different type, which means the *type* of the entire match *is itself another `match`!*, but one that returns a type and not a value which has a type\n\nstrictly speaking the `return P` isn't necessary, but it allows you to use the `in whatever` type you've destructured in other places\n\nA match type-checks mostly based on its `return` clause.\nto type-check a match, you have to:\n\n- check all constructors are accounted for\n- check that each arm's type aligns with the `return` clause, including when the `return` is a function in which case you run the concrete pattern through that function\n\nthis is how \"absurdity\" is possible, when the `return` clause gives a *different* type for the *actual* input data than it does for the concrete constructor arms! this can only happen when the input data is impossible to construct, because otherwise the constructor arms would definitely handle it since by construction they cover every possibility.\n\n```\nDefinition do_inversion (x y: N): Prop :=\n  match x, y with S _, 0 => False | _, _ => True end.\n\nDefinition le_inversion n (H: le (S n) 0): False :=\n  match H in le x y return do_inversion x y with\n  | lez x => I\n  | leS x y p => I\n  end.\n```\n\nthere are also rules about which *sort* the match target and the `return` type are, and it *broadly* seems you just have to keep `Type` and `Prop` separate? 
that seems like a massive oversimplification, but...\n\n\n`fixpoint` definitions just have to make sense and syntactically terminate\n\n\n\nit seems that inductive types with indexed values are *usefully conceptually different than functions*\nsyntactically they're similar, but it's almost like \"primitive\" functions that don't have a body, but instead are just \"declared\" to give a value of a certain type\nyou could definitely frame these as something similar to a generic type, but there's even still a difference between that concept and an indexed type\n\nA universal asserted type constructor would really help\nbasically you don't need `exists` or special case subset types like `vec` if you have a general purpose asserted type that's directly supported in the syntax\n\nyou *don't need index values at all* for `Type` definitions if you have a universal asserted type system.\nthis means you could do index values for `Prop` very differently, not treating them like functions that eventually return `Prop` but something more similar to generics\n\n```\n@(A: Type, R: (A, A) -> Prop)\nprop RT(A, A);\n  // | RTrefl: @(x), RT x x\n  | RTrefl(x): RT x x\n\n  // | RTR: @(x, y), R x y -> RT x y\n  // @() specifies names, types optional, () specifies types without names\n  | RTR@(x, y)(R x y): RT x y\n\n  // | RTtran: @(x, y, z), RT x y -> RT y z -> RT x z\n  | RTtran@(x, y, z)(RT x y, RT y z): RT x z\n\nprop le(N, N);\n  | lez: @(x) -> le(0, x)\n  | leS: @(x, y), le x y -> le(S x, S y)\n\n\nprop le[N, N];\n  | lez(x): [0, x]\n  | leS(x, y)&(le[x, y]): [S x, S y]\n\nor you could use <> instead of []\n```\n\n\nto demonstrate that an imaginary type \"models\" a real type, you have to provide:\n\n- a function that can always convert (the separation logic representation of) the real type to the imaginary type\n- a function that can convert an imaginary type into a real type with a proof that this function is a mirror image of the above 
function




https://www.cs.cmu.edu/~fp/papers/mfps89.pdf
https://github.com/VictorTaelin/calculus-of-constructions

https://github.com/coq/coq/wiki/TheoryBehindCoq

https://softwarefoundations.cis.upenn.edu/lf-current/ProofObjects.html
https://softwarefoundations.cis.upenn.edu/lf-current/Logic.html
https://www.researchgate.net/figure/Sketch-of-type-checking-rules-in-Coq_fig17_221336389
https://www.williamjbowman.com/tmp/wjb-sized-coq.pdf
https://www.labri.fr/perso/casteran/CoqArt/Tsinghua/C5.pdf
https://hal.science/hal-02380196/document
https://coq.inria.fr/refman/language/cic.html



https://arxiv.org/pdf/2105.12077.pdf



need to look at xcap paper and other references in the bedrock paper

https://plv.csail.mit.edu/blog/iris-intro.html#iris-intro
https://plv.csail.mit.edu/blog/alectryon.html#alectryon



Verified hardware simulators are easy with Magmide

Engineers want tools that can give them stronger guarantees about safety, robustness, and performance, but such a tool has to be tractably usable and respect their time

This idea exists in incentive no man's land. Academics won't think about it or care about it because it merely applies existing work, so they'll trudge along in their tenure tracks and keep publishing post hoc verifications of existing systems. Engineers won't think about it or care about it because it can't make money quickly, be made into a service, or even very quickly be used to improve some service.
This is an idea that carries basically zero short-term benefits, but incalculable long-term ones, mainly in the way it could shift the culture of software and even mathematics and logic if successful.
This project is hoping and gambling that it itself won't even be the truly exciting innovation, but that some other project will build upon it, one that wouldn't have happened otherwise. 
I'm merely hoping to be the pair of shoulders someone else stands on, and I hope the paradigm shift this project creates comes to be assumed as obvious, that future engineers will think we were insane to have written programs without proving them correct




https://mattkimber.co.uk/avoiding-growth-by-accretion/
Most effects aren't really effects but environmental capabilities, although sometimes those capabilities come with effects



Traits, shapes, and the next level of type inference

Discriminated unions and procedural macros make dynamically typed languages pointless, and they've existed for eighty years. So what gives?

What's better than a standard? An automatically checkable and enforceable standard


https://project-oak.github.io/rust-verification-tools/2021/09/01/retrospective.html
we have to go all the way. anything less than the capabilities given by a full proof checker proving theorems about the literal environment abstractions isn't going to be good enough, and will always have bugs and hard edges and cases that can't be done. but those full capabilities can *contain* other more "ad hoc" things like fuzzers, quickcheck libraries, test generators, etc. we must build upon a magmide!



stop trying to make functional programming happen, it isn't going to happen

## project values

- **Correctness**: this project should be a flexible toolkit capable of verifying and compiling any software for any architecture or environment. It should make it as easy as possible to model the abstractions presented by any hardware or host system with full and complete fidelity.
- **Clarity**: this project should be accessible to as many people as possible, because it doesn't matter how powerful a tool is if no one can understand it. 
To guide us in this pursuit we have a few maxims: speak plainly and don't use jargon when simpler words could be just as precise; don't use a term unless you've given some path for the reader to understand it; if a topic has prerequisites, point readers toward them; assume your reader is capable but busy; use fully descriptive words, not vague abbreviations and symbols.
- **Practicality**: a tool must be usable, both in terms of the demands it makes and its design. This tool is intended to be used by busy people building real things with real stakes.
- **Performance**: often those programs which absolutely must be fast are also those which absolutely must be correct. Infrastructural software is constantly depended on, and must perform well.

These values inherently reinforce one another. As we gain more ability to guarantee correctness, we can make programs faster and solve more problems. As our tools become faster, they become more usable. Guaranteeing correctness saves others time and headache dealing with our bugs. As we improve clarity, more people gather to help improve the project, making it even better in every way.

secondary values: simplicity before consistency before completeness.

cultural values: code of conduct; we're accepting and open and humble.


```
In the spirit of Richard Gabriel, the Pony philosophy is neither "the-right-thing" nor "worse-is-better". It is "get-stuff-done".

Correctness
Incorrectness is simply not allowed. It's pointless to try to get stuff done if you can't guarantee the result is correct.

Performance
Runtime speed is more important than everything except correctness. If performance must be sacrificed for correctness, try to come up with a new way to do things. The faster the program can get stuff done, the better. This is more important than anything except a correct result.

Simplicity
Simplicity can be sacrificed for performance. 
It is more important for the interface to be simple than the implementation. The faster the programmer can get stuff done, the better. It's ok to make things a bit harder on the programmer to improve performance, but it's more important to make things easier on the programmer than it is to make things easier on the language/runtime.

Consistency
Consistency can be sacrificed for simplicity or performance. Don't let excessive consistency get in the way of getting stuff done.

Completeness
It's nice to cover as many things as possible, but completeness can be sacrificed for anything else. It's better to get some stuff done now than wait until everything can get done later.

The "get-stuff-done" approach has the same attitude towards correctness and simplicity as "the-right-thing", but the same attitude towards consistency and completeness as "worse-is-better". It also adds performance as a new principle, treating it as the second most important thing (after correctness).

https://www.ponylang.io/discover/#what-is-pony
```

Overall the difference between "the-right-thing" and "worse-is-better" can be understood as the difference between upfront and marginal costs. Doing something right the first time is an upfront cost, and once paid decreases marginal costs *forever*.
The main problem in software, and the reason "worse-is-better" has been winning in an environment of growth-focused viral capitalism, is that it has been basically impossible in practice to actually do something the right way! Since our languages have never supported automatic verification, we could only hope to weakly understand what correct even meant, and then attempt to actually implement it. This meant the cost to chase the truly right thing was unacceptably uncertain.

Magmide promises neither performance nor correctness nor consistency nor completeness, but instead promises the one thing that underlies all of those qualities: knowledge. 
Complete and total formal knowledge about the program you're writing.
Magmide is simply a raw exposure of the basic elements of computing, in both the real sense of actual machine instructions and the ideal sense of formal logic. These basic elements can be combined in whatever way someone desires, even in the "worse-is-better" way! The main contribution of Magmide is that the tradeoffs one makes can be made *and flagged*. Nothing is done without knowledge.


If you can prove it you can do it


Nested environments! the tradeoffs made while designing the operating system can directly inform the proof obligations and effects of nested environments

Possible Ways to Improve Automated Proof Checking

checking assertions from the bottom up and in reverse instruction order, keeping track as we go of what assertions we're concerned with and only pulling along propositions with a known transformation path to those assertions.



https://dl.acm.org/doi/abs/10.1145/3453483.3454084
https://ocamlpro.github.io/verification_for_dummies/
https://arxiv.org/abs/2110.01098



In most of these heap-enabled lambda calculi "allocation" just assumes an infinite heap and requires an owned points-to connective in order to read.
In a real assembly language, you can always read, but reading a location you know nothing about just gets you a machine word of unknown shape, something like uninit or poison. How can I think about this in Magmide? Are there ever programs that intentionally read garbage? That's essentially always random input. 
Probably there's just a non-determinism token-effect you want.

My hunch about why my approach is going to prove more robust than continuation-passing-style is that it doesn't seem cps can directly understand programs as mere data without special primitives, whereas in my approach that's given, which makes sense since again we're merely directly modeling what the machine actually does.

yeah, lambda rust is amazing, but it's very tightly coupled to the way rust is implemented for host operating system programs. I don't think it's flexible enough to handle arbitrary machine/instruction definitions. it also can't handle irreducible control flow graphs, which absolutely could be created either with `goto` programs or by compiler optimizations that we want to be able to formally justify.

with incremental verification it probably makes sense to allow possible data races (they don't result in a stuck state) but token-flag them
the interesting thing is that certain kinds of token problems, such as memory unsafety, data races, overflow, and non-termination, actually invalidate the truth of triples! 
a program doesn't have enough certainty to guarantee *anything* if it isn't basically safe.



Someone needs to do for formal verification what Rust has been doing for systems programming

Think about sections that irrevocably exit, such as sequential sections capped by an always-exiting instruction (a branch or jump that always leaves the section, or a fall-through to the next one). You can prove that such sections, and concatenations of them, always exit in a well-founded way, and relate that to the steps relation. Then all you need for self-recursive sections is a well-founded relation and a proof that every self-recursing step that stays within the section has a triple making progress; with that, the section will always exit in a well-founded way
You can probably generalize this to whole programs if the steps relation is parameterized by the section rather than the whole program

Metaprogrammatically embedded DSL




https://www.youtube.com/watch?v=ybrQvs4x0Ps



https://arxiv.org/abs/2007.00752
> In this work, we perform a large-scale empirical study to explore how software developers are using Unsafe Rust in real-world Rust libraries and applications. Our results indicate that software engineers use the keyword unsafe in less than 30% of Rust libraries, but more than half cannot be entirely statically checked by the Rust compiler because of Unsafe Rust hidden somewhere in a library's call chain. We conclude that although the use of the keyword unsafe is limited, the propagation of unsafeness offers a challenge to the claim of Rust as a memory-safe language. Furthermore, we recommend changes to the Rust compiler and to the central Rust repository's interface to help Rust software developers be aware of when their Rust code is unsafe.



http://www.fstar-lang.org/tutorial/

```
Lexicographic orderings
F* also provides a convenience to enhance the well-founded ordering << to lexicographic combinations of <<. 
That is, given two lists of terms v₁, ..., vₙ and u₁, ..., uₙ, F* accepts that the following lexicographic ordering:

v₁ << u₁ \/ (v₁ == u₁ /\ (v₂ << u₂ \/ (v₂ == u₂ /\ ( ... vₙ << uₙ))))
is also well-founded. In fact, it is possible to prove in F* that this ordering is well-founded, provided << is itself well-founded.

Lexicographic orderings are common enough that F* provides special support to make it convenient to use them. In particular, the notation:

%[v₁; v₂; ...; vₙ] << %[u₁; u₂; ...; uₙ]
is shorthand for:

v₁ << u₁ \/ (v₁ == u₁ /\ (v₂ << u₂ \/ (v₂ == u₂ /\ ( ... vₙ << uₙ))))
Let’s have a look at lexicographic orderings at work in proving that the classic ackermann function terminates on all inputs.

let rec ackermann (m n:nat)
  : Tot nat (decreases %[m;n])
  = if m=0 then n + 1
    else if n = 0 then ackermann (m - 1) 1
    else ackermann (m - 1) (ackermann m (n - 1))
The decreases %[m;n] syntax tells F* to use the lexicographic ordering on the pair of arguments m, n as the measure to prove this function terminating.




Mutual recursion
F* also supports mutual recursion and the same check of proving that a measure of the arguments decreases on each (mutually) recursive call applies.

For example, one can write the following code to define a binary tree that stores an integer at each internal node—the keyword and allows defining several types that depend mutually on each other.

To increment all the integers in the tree, we can write the mutually recursive functions, again using and to define incr_tree and incr_node to depend mutually on each other. 
F* is able to prove that these functions terminate, just by using the default measure as usual.\n\ntype tree =\n  | Terminal : tree\n  | Internal : node -> tree\n\nand node = {\n  left : tree;\n  data : int;\n  right : tree\n}\n\nlet rec incr_tree (x:tree)\n  : tree\n  = match x with\n    | Terminal -> Terminal\n    | Internal node -> Internal (incr_node node)\n\nand incr_node (n:node)\n  : node\n  = {\n      left = incr_tree n.left;\n      data = n.data + 1;\n      right = incr_tree n.right\n    }\n\nNote\nSometimes, a little trick with lexicographic orderings can help prove mutually recursive functions correct. We include it here as a tip, you can probably skip it on a first read.\n\nlet rec foo (l:list int)\n  : Tot int (decreases %[l;0])\n  = match l with\n    | [] -> 0\n    | x :: xs -> bar xs\nand bar (l:list int)\n  : Tot int (decreases %[l;1])\n  = foo l\n\nWhat’s happening here is that when foo l calls bar, the argument xs is legitimately a sub-term of l. However, bar l simply calls back foo l, without decreasing the argument. The reason this terminates, however, is that bar can freely call back foo, since foo will only ever call bar again with a smaller argument. 
You can convince F* of this by writing the decreases clauses shown, i.e., when bar calls foo, l doesn't change, but the second component of the lexicographic ordering does decrease, i.e., 0 << 1.
```

```
// the same shape of mutual recursion in C; nothing here checks termination
#include <stdbool.h>

bool is_odd(unsigned int n);

bool is_even(unsigned int n) {
  if (n == 0) return true;
  else return is_odd(n - 1);
}

bool is_odd(unsigned int n) {
  if (n == 0) return false;
  else return is_even(n - 1);
}
```



https://iris-project.org/tutorial-pdfs/iris-lecture-notes.pdf
https://gitlab.mpi-sws.org/iris/tutorial-popl21
https://gitlab.mpi-sws.org/iris/iris/-/blob/master/docs/heap_lang.md
https://gitlab.mpi-sws.org/iris/iris/blob/master/docs/proof_mode.md




I like the idea of having a `by` operator that can be used to justify passing a variable as some type with the accompanying proof script. so for example you could say `return x by crush`, or more complicated things such as `return x by (something; something)`. what level of automatic crushing should the system do by default? should there be a cheap crusher that's always used even without a `by`, and `by _` means "use a more expensive crusher"? or does no `by` mean to defer to a proof block? it makes sense to me for no `by` to imply simply deferring (trying to pass something as a type we can quickly verify it can't possibly be is just a type error), whereas `by _` means "use the crusher configured at this scope", and something like file/module/section/function/block level crushers can be configured
a small and easy-to-use operator for embedding the proof language into the computational language would probably go a long way to making Magmide popular and easy to understand.

it would probably be nice to have some shorthand for "extending" the proof value of functions and type aliases. 
something like `fn_name ;theorem` that implies adding the assumptions of the thing and the thing itself into the context of the proof, and adds the new proof for further use.


look at koka lang
what magmide can add is *unannotated* effects. polymorphic effects in a language like koka seem (at first glance) to require annotation, whereas in magmide they are simply implied by the `&` combination of assertions that is inherent to what a type is.
a problem with effectual control flow is that we almost never actually *care* about control flow differences. effects in koka seem to me to be too obsessed with "purity" in the pedantic functional programming sense, rather than in the *logical correctness* sense. I don't terribly care if a subfunction causes yield effects or catches internal exceptions; I care about its performance and whether it is correct or not. magmide is concerned with *correctness* effects, as in whether a function "poisons" the program with possible divergence or crashes or other issues. if a subfunction does *potentially* dangerous things but internally proves them, and it doesn't impact performance in a way I need to be aware of, then I don't care. well, it looks like they *largely* understand that.
what I don't love though is how obsessed they are with effect handlers, to the extent they have `fun` and `val` variants **that are equivalent to just passing down a closure or value!** I guess it allows the effect-giving functions to be used in more contexts than would be possible if they just required a function or value
value capabilities seem cool, but in a world where we can verify everything, a global variable is in fact totally acceptable. hmmm
here's my main takeaway from koka: I actually think it's pretty cool, but I think it's important to distinguish *control flow* effects from *correctness* effects. they have completely different purposes. 
in fact I'm hesitant to call what koka has effects at all; they're more like "contextual handlers" or something. maybe it's better just to call what *I'm* adding something else.

Honestly it's pretty cool what koka has implemented. But I'm not as excited about it for async, because async code isn't really an effect the more I think about it.
Async is a type-level manifestation of a completely different mode of execution, in which execution is driven primarily by closures rather than by simple function execution. The program must be completely altered in terms of what data structures it produces and how they are processed.
Algebraic effects don't save us! Just because the async effect can theoretically be composed with any other effect type doesn't mean that's actually a good choice. Async is all about recapturing and efficiently using io downtime to do more cpu work. A program simply must be structured differently in order to actually achieve that goal, and designating functions as async makes that structure explicit.
A function that actually awaits anything has now been effectively colored! It doesn't matter that other effects can exist alongside it, any calling function must either propagate the effect or handle it, which is exactly equivalent to how it works in rust.
The thing that bothers me about the red-blue complaint is that it just ignores the reality that async programs have to be structured differently if you want to gain the performance benefits. Async functions merely prod engineers to make the right choices given that constraint.
They're of course free to do whatever they like, they can just block on futures sequentially, or use the underlying blocking primitives, or use a language with transparent async, but they'll pick up the performance downsides in each case. 
But as they say, you can't pick up one end of the stick without picking up the other.
I'm feeling more and more that other abstractions handle some of these specific cases better, at least from the perspective of how easy they are to reason about.
For example, the `fun` and `val` versions of koka effects can be thought of as implicit arguments that can be separately passed in a different tuple of arguments. This is the same as giving a handler, but with stricter requirements about resumption, which means we don't have to think about saving the stack. If some implicit arguments default to a global "effectful" function, then a call of that function with that default will itself have that effect.
Magmide could do algebraic effects but monomorphize all the specific instances, making them zero-cost. All of this can be justified using branching instructions.
Functions could use a global "unsure" function, equivalent to panic but taking a symbol and a message; the default implicit value of this function is merely an instantiation of panic that ignores the symbol. Calling functions can provide something to replace the implicit panic and have it statically monomorphized.



The term "gradual verification" is useful to sell people on what's unique about this project. Magmide is tractable for the same reasons something like TypeScript or mypy is tractable.



An exciting idea: having the "language" be generic over a *machine*, which includes possibly none or many banks (a possibly infinite list) of registers (each a bit array of known length) or memory locations (also bit arrays of known length, which accounts for architecture alignment), and a concrete instruction set. 
Then we can understand the "language" to just be a set of common tools and rules for describing machines.

Some nice things follow from this:

- "artificial" machines such as those supported by a runtime of some sort are easily described
- machines can have multiple register and memory banks of different sizes, and dependent types could allow us to have different access rules or restrictions or semantics for each of them. metaprogramming can "unbundle" these banks into simple names if that makes sense.
- it becomes pretty simple to check if a machine is "abstract" or "concrete", by determining if all the sizes of register/memory banks are known or unknown (or possibly the correct thing is finite vs infinite?). with that information we can add alerts or something if the memory allocation function of an abstract machine isn't somehow fallible (in a concrete machine, failure to allocate is actually just a program failure! it has a more concrete meaning of having too much data of a specific kind. this concrete semantic failure in a concrete machine is what "bubbles up" to create an infinite but fallible allocation instruction in an abstract machine)



I'm starting to think that what I'm really designing is more a *logic* for typed assembly languages. it's not *quite* like llvm, precisely because to really correctly compile to each individual instruction set, those instruction sets have to be fully specified!
it seems I'm moving more toward a general logic with a *toolbox* of abstract instruction semantics, each of which can be tied concretely to actual applications. but the full instruction set of any architecture can be specified in full.
it really does point toward having a few different "families" of programs:

- embedded programs, in which the exact specifications are known up front
- os programs? 
ones where the instruction set can be known but things like memory sizes aren't?
- runtime programs, ones where some existing environment is already provided, often allowing looser assumptions

probably what we want is a "general core" of instructions we assume every machine has some equivalent for, which we can build the "higher level" languages on top of. then to write a "backend" someone would fully specify the instruction set and tie the real instructions to the general core ones, at least if they wanted to be able to support the higher level languages



https://www.ralfj.de/blog/2020/12/14/provenance.html
john regehr office hours



- dependent type proof checker with purely logical `prop` and `set?` types
- definition of bits and bit arrays that are given special treatment
- definition of representation of logical types by bit arrays
- prop of a "machine representable" type. since we can represent props as bit arrays, these can be represented
- some syntactic metaprogramming commands, which can take basic syntactic structures like strings or tokens or identifiers and transform them into commands or other instructions
- some semantic metaprogramming commands, which can operate on variables or identifiers or whatever to extract compile-time information about them such as their type

- abstract instructions that are able to operate on bit arrays (for now we take as given that these abstract instructions can be validly encoded as bit arrays with a known size, since llvm will actually do the work of translating them for now. in the future we'll absorb what llvm does by creating a system of concrete "hardware axioms" that represent the instruction set and memory layout etc of a real machine, and a mapping from the abstract instructions to these concrete ones. 
in the immediate future we'll also need "operating system" axioms, at least until there are operating systems built in bedrock that can simply be linked against)
- formalization of instruction behaviors, especially control flow, locations, and allocation, and investigations into the well-foundedness of recursive locations


---

Random theorizing about syntax:

```
fn debug value;
  match known(value).type;
    struct(fields) =>
      for key in fields;
        print("#{key}: #{value[key]}")
    nat(n) => print(n)
    bool(b) => print(b)
```

---

basically this project will have a few large steps:

first we'll define some really basic proof of concept of a theory of known types. this first version will basically just use the "computable terms are a subset of terms, and we only bother to typecheck terms once we've fully reduced them to computable terms" approach. there are a million ways to go about doing this, so we'll just keep it really simple. we'll do this in a "simply typed lambda calculus" so it's easy to reason about.

we'd probably want to demonstrate that this pattern can handle literally any meta-programming-like pattern, including:

- generics
- bounded generics
- higher-kinded generics (demonstrate a monad type?)
- macros of all kinds

probably our definitions of preservation and soundness etc would be a little more nuanced. we'd probably also require the assumption that the known functions reduced "correctly", something that would depend on the situation


all computable types are simply a bit array with some predicate over that bit array. with this we can define n-ary unions, tuples, structs, and the "intersection" type that simply "ands" together the predicates of the two types

then we can get more interesting by having "pre" typechecks. 
really what we would be doing there is just trying to allow people authoring higher-order "known" functions to prove their functions correct, rather than simply relying on "this known function will eventually reduce to some terms and *those* terms will be typechecked :shrug:". Basically we want these kinds of authors to have strong typing for their things as well, in a way that goes beyond just typechecking the actual "type value" structs that they happen to be manipulating.
we can think about it this way: in languages like rust, macros just input/output token streams. from a meta-programming perspective, that's like a program just operating on bytestreams at both ends. we want people to be able to type their known functions just as well as all the *actual* functions. what this can allow us to do is typecheck a program, and know *even before we've reduced certain known functions* that those known functions aren't being used appropriately in their context, and won't reduce to terms that will typecheck. in a language that's formally verified, we can then even potentially do the (very scary) but potentially very performance-enhancing task of *not actually bothering to typecheck the outputs of these known functions*. if we've verified the pre-conditions of the known function, and we have a proof that the known function will always output terms having some particular type, we can just take that type as a given.


after we've defined the semantics of types that consist *only* of bit arrays with a predicate, we can start actually defining the language semantics. the big things are n-ary unions and match statements, module paths and the dag, type definition syntax, etc. but also the very interesting and potentially massive area of figuring out how we can prove that a loop (or family of co-recursive functions) will always terminate. 
since this language would have a rich proof system, doing that can actually be tractable and useful from the perspective of programmers.\nlexicographic ordering of stack arguments [\"Proving termination\"](http://www.fstar-lang.org/tutorial/).\n\ndefining and proving correct a type inference algorithm\n\n\nthen we have all the cool little ideas:\n\n- the \"infecting\" types of certain operations. we want infecters for code that potentially panics, diverges, accesses out of bounds memory (unsound), accesses uninitialized memory (unsafe?), or leaks any \"primitive\" resource (we could make this generic by having some kind of predicate that is \"optional\" but as a result of being optional infects the result type. so someone could write a library that has optional invariants about the caller needing to give back resources or something like that, and you can represent a program that doesn't maintain these invariants, but then your types will get infected. perhaps a more interesting way to do this is simply by understanding that any predicate over a type that *doesn't actually make any assertions about the type value's byte array* is like this?). it's probably also true that if we do this \"infecting\" correctly, we can notice programs where *it's certain* that some infected type consequence will happen, and we can warn programmers about it.\n\n- a \"loop\" command that's different than the \"while\" command, in the sense that the program doesn't ask for any proof that a \"loop\" will always terminate, since it's assumed that it might not. 
we can still maybe have some finer-grained check that simply asks if a loop construct has any code after it, and if it does there has to be *some* way of breaking out of the loop (other than the process being forcefully terminated, such as by receiving some control signal), or else that code is all dead.\n\n- with a tiny language that's so flexible, we can define and reason about a host of ergonomic sugars and optimizations.\n\n- all the little syntax things you like, such as the \"block string\", the different ways of calling and chaining functions, the idea of allowing syntax transforming known functions (or \"keywords\") and of allowing these kinds of functions to be attached as \"members\" of types for maximum ergonomics and allowing things like custom \"question mark\" effectful operators.\n\n- in our language we can define \"stuckness\" in a very different way, because even very bad things like panics or memory unsafe operations aren't *stuck*, they're just *infected*. this means that the entire range of valid machine code can be output by this language. this probably means the reasonable default return type of the `main` function of a program (the one that we will assume if they don't provide their own) should be `() | panic`, so we only assume in the common case that the program might be infected with the panic predicate but not any of the \"unsoundness\" ones.\n\n- \"logical\" vs normal computable types. 
logical types would basically only be for logic and verification, and not have any actual output artifacts, which means that all the values inhabiting logical types have to be known at compile time, and we can cheat about how efficient they are to make it more convenient to write proofs about them\n\n- wouldn't it be cool to connect proofs about this language to existing verification efforts around llvm?\n\n\n\n\n\nfor co-recursive functions: we can create graphs of dependencies between functions, and we can group them together based on how strongly connected they are. for example\n\nhere we mean that a and b each reference the other (and potentially themselves), so once we enter this group we might never leave\n(a - b)\n\nbut if a and b point to some other function c, and c doesn't reference a or b (or any function that references a or b), then we'll never visit that group of a and b ever again, *but c might be co-recursive with some other family of functions*. however it's still useful in this situation to understand that we have in some important way *made progress in the recursion*.\nit seems that the important idea of a co-recursive family of functions is that from any of the functions you could go through some arbitrary set of steps to reach any of the other functions.\n\n\nif we unbundle both functions and the loops/conditionals into mere basic blocks like in llvm, then it's possible to do this graph analysis over the entire program in the same way. 
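the grouping described here is just strongly connected components of the call graph; a small kosaraju-style sketch under that assumption (plain strings standing in for functions, everything hypothetical):

```rust
use std::collections::HashMap;

// a call graph: function name -> the functions it directly calls
type CallGraph<'a> = HashMap<&'a str, Vec<&'a str>>;

fn post_order<'a>(graph: &CallGraph<'a>, node: &'a str, seen: &mut Vec<&'a str>, out: &mut Vec<&'a str>) {
	if seen.contains(&node) {
		return;
	}
	seen.push(node);
	for &next in graph.get(node).into_iter().flatten() {
		post_order(graph, next, seen, out);
	}
	out.push(node);
}

fn reversed<'a>(graph: &CallGraph<'a>) -> CallGraph<'a> {
	let mut result: CallGraph<'a> = HashMap::new();
	for (&from, tos) in graph {
		result.entry(from).or_default();
		for &to in tos {
			result.entry(to).or_default().push(from);
		}
	}
	result
}

// kosaraju's algorithm: each returned family is a set of functions that can
// all reach each other, i.e. a co-recursive family we might never leave
fn corecursive_families<'a>(graph: &CallGraph<'a>) -> Vec<Vec<&'a str>> {
	let mut seen = Vec::new();
	let mut order = Vec::new();
	for &node in graph.keys() {
		post_order(graph, node, &mut seen, &mut order);
	}
	let reversed_graph = reversed(graph);
	let mut assigned = Vec::new();
	let mut families = Vec::new();
	for &node in order.iter().rev() {
		if assigned.contains(&node) {
			continue;
		}
		let mut family = Vec::new();
		post_order(&reversed_graph, node, &mut assigned, &mut family);
		families.push(family);
	}
	families
}
```

for the `(a - b)` example, `a` and `b` come out as one family and `c` as its own; leaving a family for one it can never re-enter is exactly the sense in which the recursion has *made progress*.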
with some interesting new theories about what it means to make progress towards termination *in data* rather than *in control flow*, we can merge the two to understand and check if programs are certainly terminating.\nwe can also unbundle the idea of \"making progress in data\" to \"making progress in predicates\", since our types are basically only defined as predicates over bit arrays.\n\n\n\n\n\n\nafter all this, we really start to think about the proof checker, and how the proof aspect of the language interacts with the known functions.\nthe simplest thing to notice is that theorems are just known functions that transform some instantiation of a logical type (so all the values of the logical type are known at compile time) to a different type.\nthe more interesting thing to notice is that the same kind of really slick \"tacticals\" system that's included in coq can just be *fallible* functions that take props and try to produce proofs of them. this means that the \"typecheck\" function that the compiler actually uses when compiling code should be exposed to all functions (and therefore of course the known functions), and that it should return some kind of `Result` type. that way tacticals can just call it at will with the proofs they've been constructing, and return successfully if they find something the core typechecking algorithm is happy with.\n\n\n\n\n---\n\n\nread introduction to separation logic\n\nthe biggest way to make things more convenient for people is to have the *certified decision procedures* described by CPDT in the form of the type checking functions!!! 
that means that certain macros or subportions of the language that fit into some decidable type system can just have their type checking function proven and provided as the proof object!\n\n\nrather than have many layers of \"typed\" compilers each emitting the language of the one below it as described in the foundational proof carrying code paper, we simply have *one* very base low level language with arbitrarily powerful metaprogramming and proof abilities! we can create the higher level compilers as embedded constructs in the low level language. we're building *up* instead of *down*.\n\n\nhttps://www.cs.cmu.edu/afs/cs.cmu.edu/project/fox-19/member/jcr/www15818As2011/cs818A3-11.html\n(here now: 3.12 More about Annotated Specifications)\nhttps://www.cs.cmu.edu/afs/cs.cmu.edu/project/fox-19/member/jcr/www15818As2011/ch3.pdf\n\nhttps://en.wikipedia.org/wiki/Bunched_logic\nhttp://www.lsv.fr/~demri/OHearnPym99.pdf\n\nhttps://arxiv.org/pdf/1903.00982.pdf\nhttps://aaronweiss.us/pubs/popl19-src-oxide-slides.pdf\n\nthe real genius of rust is education! people can understand separation logic and formal verification if we teach them well!\n\na basic theory of binary-representable types would also of course be incredibly useful here.\nit seems that carbon could be specified completely by defining the simple `bit` type, and the basic tuple/record, union, and intersection combinators (it seems that intersection types can/should only be used between named records, and to add arbitrary logical propositions to types? it might make sense to only use intersection (as in `&`) for propositions, and have special `merge` and `concat` etc type transformer known functions to do the other kinds of operations people typically think of as being \"intersection\". then `&` is simple and well-defined and can be used to put any propositions together? 
it might also function nicely as the syntactic form for declaring propositions, instead of `must`, so `type Nat = int & >= 0`)\n\nlogical propositions are so powerful that they could be the entire mode of specifying the base types! booleans are just a `byte` or whatever with props asserting that it can only hold certain values. traits are just props asserting that there exists an implementation in scope satisfying some shape. and of course arbitrary logical stuff can be done, including separation logic/ghost state type things.\n\na reason to include the same kind of constructive inductive propositions is that it provides two ways of attacking\n\na theory of \"known\" types that allows known functions to produce data structures representing these types is probably the most important first step. it seems you could prove that known types are general enough to provide the language with generics, all kinds of macros, and then dramatically expand the reach of usual static type systems by providing \"type functions\", which allow arbitrary derivations (you can easily do rust derived traits) and mapping, which allows for the kind of expressivity that typescript mapped and conditional types allow\n\na general truth to remember about the goals of carbon is what really made rust successful. it didn't shy away from complexity, and it didn't water down what people were capable of achieving, but it did find clean abstractions for complex things, and *especially* it did an amazing job **teaching** people how those concepts work. an amazing next generation language is equal parts good language/abstraction design and pedagogy. 
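the "known types" idea above could be sketched as plain data plus ordinary functions over it (a hypothetical representation, in the flavor of typescript's mapped types):

```rust
// a hypothetical data representation of "known" types: a type is just a
// value built from bits and the basic combinators, so ordinary
// compile-time ("known") functions can inspect and transform it
#[derive(Debug, Clone, PartialEq)]
enum KnownType {
	Bit,
	Tuple(Vec<KnownType>),
	Record(Vec<(String, KnownType)>),
	Union(Vec<KnownType>),
}

// a "type function" in the mapped-types style: make every field of a
// record optional by unioning it with the empty tuple (a "none" marker)
fn map_optional(ty: &KnownType) -> KnownType {
	match ty {
		KnownType::Record(fields) => KnownType::Record(
			fields.iter()
				.map(|(name, field_ty)| {
					let optional = KnownType::Union(vec![field_ty.clone(), KnownType::Tuple(vec![])]);
					(name.clone(), optional)
				})
				.collect(),
		),
		other => other.clone(),
	}
}

fn main() {
	let point = KnownType::Record(vec![
		("x".to_string(), KnownType::Bit),
		("y".to_string(), KnownType::Bit),
	]);
	let partial_point = map_optional(&point);
	// every field is now a union of the original type and the empty tuple
	match &partial_point {
		KnownType::Record(fields) => {
			assert!(fields.iter().all(|(_, t)| matches!(t, KnownType::Union(_))));
		}
		_ => unreachable!(),
	}
}
```

derives and macros would just be more functions of this same shape, consuming and producing `KnownType`-like values.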
if you give people a huge amount of power to build incredible things, *and* you do an excellent job teaching them both how to use and why they should use it, then you've got an amazing project on your hands.\n\nalso very important, and something that the academics have *miserably* failed to do (in addition to their language design and the teaching materials, both of which are usually absolutely dreadful), is building the *tooling* for the language. the tools (think `cargo`) and community infrastructure (think the crates system) are probably *more* important on average for the success of a language community than the language itself. people won't use even the most powerful language if it's an absolute godawful chore to accomplish even the smallest thing with it\n\nanother thing the academics don't realize and do wrong (especially in coq) is just their conventions for naming things! in Coq basic commands like `Theorem` are both inconveniently capitalized and fully spelled out, but important variable names that could hint to us about the semantic content of a variable are given one letter names! that's completely backwards from a usability standpoint, since commands are something we see constantly, can look up in a single manual, and can have syntax highlighters give us context for; whereas variable names are specific to a project or a function/type/proof. shortening `Theorem` to `th` is perfectly acceptable, and lets us cut down on syntax in a reasonable place so we aren't tempted to do so in unreasonable places. `forall` could/should be shortened to something like `fa` or even a single character like `@`. 
`@(x: X, y: Y)` could be the \"forall tuple\", equivalent to `forall (x: X) (y: Y)`\n\n## building a proof checker!\nhttps://cstheory.stackexchange.com/questions/5836/how-would-i-go-about-learning-the-underlying-theory-of-the-coq-proof-assistant\nhttps://www.irif.fr/~sozeau/research/publications/drafts/Coq_Coq_Correct.pdf\nhttps://github.com/coq/coq/tree/master/kernel\n\nyou should almost certainly do everything you can to understand how coq works at a basic level, and read some of the very earliest papers on proof checkers/assistants to understand their actual machinery. hopefully the very basics are simple, and most of the work is defining theories etc on top. hopefully the footprint of the actual checker is tiny, and it's the standard libraries and proof tactics and such that really create most of the weight\n\ntheory of known types\ncarbon (and various projects in carbon) (when thinking about the compiler and checking refinement/dependent types, it probably makes sense to use an SMT solver for only the parts that you can't come up with a solid algorithm for, like the basic type checking, or to only fall back on it when some simple naive algorithm fails to either prove or disprove)\n\nhttps://www.cs.princeton.edu/~appel/papers/fpcc.pdf\nhttps://www.google.com/books/edition/Program_Logics_for_Certified_Compilers/ABkmAwAAQBAJ?hl=en&gbpv=1&printsec=frontcover\n\nhttps://www3.cs.stonybrook.edu/~bender/newpub/2015-BenderFiGi-SICOMP-topsort.pdf\n\n\nhttps://hal.inria.fr/hal-01094195/document\nhttps://coq.github.io/doc/V8.9.1/refman/language/cic.html\nhttps://ncatlab.org/nlab/show/calculus+of+constructions\nhttps://link.springer.com/content/pdf/10.1007%2F978-0-387-09680-3_24.pdf ????\n\n\nhttps://softwarefoundations.cis.upenn.edu/lf-current/ProofObjects.html\nhas a little portion about type-checking and the trusted base, reassuring\n\n\n\n\"Given a type T, the type Πx : T, B will represent the type of dependent\nfunctions which given a term t : T computes a term of 
type B[t/x] corresponding to proofs of\nthe logical proposition ∀x : T, B. Because types represent logical propositions, the language will\ncontain empty types corresponding to unprovable propositions.\nNotations. We shall freely use the notation ∀x : A, B instead of Πx : A, B when B represents\na proposition.\"\n\ntheorems are just *dependently* typed functions! this means there's a nice \"warning\" when people construct propositions that don't use their universally quantified arguments: such propositions are vacuous or trivial and don't prove anything about the input.\n\n\n\na big reason unit tests are actually more annoying and slower in development is the need for fixture data! coming up with either some set of examples, or some fixture dataset, or some model that can generate random data in the right shape is itself a large amount of work that doesn't necessarily complement the actual problem at hand. however proving theorems about your implementation is completely complementary: the proofs lock together with the implementation exactly, and you can prove your whole program correct without ever running it! once someone's skilled with the tool, that workflow is massively efficient, since they never have to leave the \"code/typecheck\" loop.\nalso, proof automation is actually *much more general and easier* than automation of testing. with testing, you need to be able to generate arbitrarily specific models and have checking functions *that aren't the same as the unit under test*. 
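this automation story leans on the earlier idea that the compiler's typecheck function is exposed as an ordinary fallible function; a toy sketch of how tacticals then become plain code (everything here is hypothetical, a real checker would do reduction and equivalence checking):

```rust
// toy terms: just named constants, enough to show the shape of the api
#[derive(Debug, Clone, PartialEq)]
struct Term(String);

// the core checker the compiler itself uses, exposed as a plain fallible
// function; here a stub that only accepts a proof exactly matching the goal
fn typecheck(proof: &Term, goal: &Term) -> Result<(), String> {
	if proof.0 == goal.0 {
		Ok(())
	} else {
		Err(format!("{:?} does not prove {:?}", proof, goal))
	}
}

// a "tactical": just ordinary code that proposes candidate proofs and
// returns the first one the core checker is happy with
fn first_success(candidates: &[Term], goal: &Term) -> Option<Term> {
	candidates.iter().find(|&c| typecheck(c, goal).is_ok()).cloned()
}

fn main() {
	let goal = Term("trivial".to_string());
	let found = first_success(&[Term("wrong".to_string()), Term("trivial".to_string())], &goal);
	assert_eq!(found, Some(Term("trivial".to_string())));
	assert!(typecheck(&Term("nope".to_string()), &goal).is_err());
}
```

because the tactical only ever *asks* the core checker, a buggy tactical can waste time but never admit a bad proof.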
duplicating the checking logic that way is a huge amount of extra effort.\n\n\n\nprobably ought to learn about effect systems as well\n\nan infecting proposition for a blocking operation in an async context is a good idea\n\n\n\nhttps://www.cs.cmu.edu/~rwh/papers/dtal/icfp01.pdf\nhttp://www.ats-lang.org/\nlooking at dml and xanadu might be good\n\na very plausible reason that projects like dependently-typed-assembly-language and xanadu and ats haven't worked is that smart separation logic wasn't there yet, and those languages didn't have powerful enough metaprogramming!\n\nin bedrock, the actual *language* can literally just be a powerful dependently typed assembly language along with the arbitrarily powerful meta-programming allowed by known types and some cool \"keyword\"-like primitives, but then the *programmer facing* language can have structs and functions and all the nice things we're used to but all defined with the meta-programming! the meta-programming is the thing that really allows us to package the hyper-powerful and granular dependent type system into a format that is still usable and can achieve mass adoption. in this way we can kinda call this language \"machine scheme/lisp\".\n\n\na mistake everyone's been making when integrating dependent/refinement types into \"practical\" languages is requiring that only first order logic be used, and therefore that the constraints are *always* automatically solvable. We can still keep those easy forms around just by checking if they're applicable and then using them, but some people need/want more power and we should just give it to them! they'll be on their own to provide proofs, but that's fine!\nwe're really making this tradeoff: would we rather have a bunch of languages that are easy to use but lack so much power that we routinely create unsafe programs, or occasionally encounter a problem whose correctness is a huge pain in the ass to formalize but that we're still capable of formalizing? I think we definitely want the second! 
And we can make abstractions to allow us to work in the first area for a subset of easily-shaped problems but still directly have \"escape hatches\" to the more powerful layer underneath. a full proof checker in a language gives us the exciting option to always include in our meta languages a direct escape hatch right down into the full language!\n\n\n\n\nas a future thing, the whole system can be generic over some set of \"hardware axioms\" (the memory locations and instructions that are intrinsic to an architecture), along with functions describing how to map the \"universal\" instructions and operations into the hardware instructions. an \"incomplete\" mapping could be provided, and compiling programs that included unmapped universal instructions would result in a compiler error\n\n\n\n\n\n\nthis is interesting, he's making a lot of the same arguments I am\nhttps://media.ccc.de/v/34c3-9105-coming_soon_machine-checked_mathematical_proofs_in_everyday_software_and_hardware_development\nhttps://github.com/mit-plv/bedrock2\n\nhttp://adam.chlipala.net/frap/frap_book.pdf\n"
  },
  {
    "path": "old/checker.rs",
    "content": "// use std::collections::{HashMap, HashSet};\n\n// // a Ctx is a map of *mere Idents* (which for now will be strings but later will be intern ids) to CtxItems\n// // a \"Path\" isn't a thing held by a Ctx, a Path is the *result of chained accessors*\n\n// // when you build a Ctx, you only add the base Idents available at this point\n// // the Ctx for all the items in a module includes all the other items in the module, the base names of the siblings of the module, and \"crate\" and \"super\"\n// // within a block of statements, the Ctx inherits all the items from the parent scope (either a Module or another block of statements), and iteratively adds things to the Ctx as it goes\n\n// #[derive(Debug, Clone, PartialEq)]\n// struct Module {\n// \tname: String,\n// \titems: Vec<ModuleItem>,\n// }\n\n\n// // TODO at some point this will be some kind of id to an interned string\n// type Ident = String;\n// // a simple identifier can refer either to some global item with a path like a type or function (types and functions defined inside a block of statements are similar to this, but don't have a \"path\" in the strict sense since they aren't accessible from the outside)\n// // or a mere local variable\n// #[derive(Debug)]\n// struct Ctx {\n// \tscope: HashMap<Ident, CtxItem>,\n// \terrors: Vec<String>,\n// }\n\n// impl Ctx {\n// \tfn from<const N: usize>(pairs: [(Ident, CtxItem); N]) -> Ctx {\n// \t\tCtx { scope: HashMap::from(pairs), errors: Vec::new() }\n// \t}\n// \t// from_iter\n\n// \tfn checked_insert(&mut self, ident: Ident, ctx_item: CtxItem) {\n// \t\tif let Some(existing_item) = self.scope.insert(ident, ctx_item) {\n// \t\t\tself.add_error(format!(\"name {ident} has already been used\"));\n// \t\t}\n// \t}\n\n// \tfn add_error(&mut self, error: String) {\n// \t\tself.errors.push(error);\n// \t}\n// }\n\n// #[derive(Debug)]\n// enum CtxItem {\n// \tModule(Module),\n// \tProp(PropDefinition),\n// \tTheorem(TheoremDefinition),\n// \tLocal(Term),\n// 
}\n\n\n// fn type_check_program(modules: Vec<Module>) -> CheckResult<()> {\n// \t// TODO add all the prelude uses!\n// \t// `use crate::std` for example\n// \tlet crate_module = Module { name: \"crate\".into(), items: modules.into_iter().map(|m| ModuleItem::Module(m)) };\n// \t// TODO this is where resolutions of all the imported crates would go!\n// \tlet crate_ctx = Ctx::from([(\"crate\", crate_module), (\"super\", crate_module)]);\n// \ttype_check_module(crate_module.clone(), crate_ctx, &crate_module)\n// }\n\n// fn type_check_module(module: Module, parent_ctx: Ctx, crate_module: &Module) -> CheckResult<()> {\n// \tif module.name == \"crate\" || module.name == \"super\" || module.name == \"self\" {\n// \t\terrors.push(format!(\"can't name a module reserved word {module.name}\"));\n// \t}\n// \t// TODO clone from parent? this means errors needs to be a global Arc or something, so every cloned Ctx points to the same thing\n// \tlet mut ctx = parent_ctx.clone();\n\n// \t// this first pass does nothing but build the ctx which checks for name collisions\n// \tfor item in module.items {\n// \t\tname_pass_type_check_module_item(item, &parent_ctx, &mut ctx);\n// \t}\n\n// \tlet ctx = ctx;\n// \tfor item in module.items {\n// \t\tmain_pass_type_check_module_item(item, &ctx, &crate_module, &module);\n// \t}\n// }\n\n// fn name_pass_type_check_module_item(module_item: ModuleItem, parent_ctx: &Ctx, ctx: &mut Ctx) {\n// \tmatch module_item {\n// \t\tUse(use_tree) => {\n// \t\t\t// when checking a use_tree, it can only refer to what's available *before* all the other items in this module are defined\n// \t\t\t// but it of course adds things to the ctx\n// \t\t\ttype_check_use_tree_module_level(use_tree, &parent_ctx, &mut ctx);\n// \t\t},\n// \t\tModule(child_module) => {\n// \t\t\tctx.checked_insert(child_module.name, CtxItem::Module(child_module))\n// \t\t},\n// \t\tProp(prop_definition) => {\n// \t\t\tctx.checked_insert(prop_definition.name, 
CtxItem::Prop(prop_definition));\n// \t\t},\n// \t\tTheorem(theorem_definition) => {\n// \t\t\tctx.checked_insert(theorem_definition.name, CtxItem::Theorem(theorem_definition));\n// \t\t},\n// \t\tLog(term) => {\n// \t\t\t// logging can't effect the Ctx, but it *can* refer to anything in the file so checking must be deferred\n// \t\t},\n// \t}\n// }\n\n// fn main_pass_type_check_module_item(module_item: ModuleItem, ctx: &Ctx, crate_module: &Module, super_module: &Module) {\n// \tmatch module_item {\n// \t\tUse(_) => { /* nothing to do, already checked this */ },\n// \t\tModule(child_module) => {\n// \t\t\tlet child_ctx = HashMap::from([(\"crate\": crate_module), (\"super\", super_module)]);\n// \t\t\ttype_check_module(child_module, child_ctx, &crate_module);\n// \t\t},\n// \t\tProp(prop_definition) => {\n// \t\t\ttype_check_prop_definition(prop_definition, &ctx)\n// \t\t},\n// \t\tTheorem(theorem_definition) => {\n// \t\t\ttype_check_theorem_definition(theorem_definition, &ctx)\n// \t\t},\n// \t\tLog(term) => {\n// \t\t\ttype_check_term(term, &ctx);\n// \t\t},\n// \t}\n// }\n\n// fn type_check_statements(statements: Vec<Statement>, parent_ctx: &Ctx, crate_module: &Module, super_module: &Module) -> CheckResult<()> {\n// \tlet mut ctx = parent_ctx.clone();\n\n// \tfor statement in statements {\n// \t\tmatch statement {\n// \t\t\t_ => { /* nothing to do for these on this pass */ },\n// \t\t\tInnerModuleItem(module_item) => {\n// \t\t\t\tname_pass_type_check_module_item(module_item, &parent_ctx, &ctx);\n// \t\t\t},\n// \t\t}\n// \t}\n\n// \tfor statement in statements {\n// \t\tmatch statement {\n// \t\t\tLet { name, term } => {\n// \t\t\t\ttype_check_term(term, &ctx);\n// \t\t\t\t// TODO mark this term as invalid?\n// \t\t\t\tctx.checked_insert(name, CtxItem::Local(term));\n// \t\t\t},\n// \t\t\tBare(term) => {\n// \t\t\t\t// this must be a return\n// \t\t\t},\n// \t\t\t// TODO this is problematic, since this ordering would imply inner module items can refer to 
lets?\n// \t\t\tInnerModuleItem(module_item) => {\n// \t\t\t\tmain_pass_type_check_module_item(module_item, &ctx, &crate_module, &super_module);\n// \t\t\t},\n// \t\t}\n// \t}\n// }\n\n\n\n// fn do_accessors(ctx: &Ctx, mut current_item: CtxItem, accessor_idents: Vec<Ident>) -> CheckResult<(Option<Ident>, CtxItem)> {\n// \tlet mut current_item = current_item;\n// \tlet mut current_ident = None;\n// \tfor accessor_ident in accessor_idents {\n// \t\t// TODO handle super and crate\n// \t\tcurrent_item = ctx.checked_access_path(current_item, accessor_ident)?;\n// \t\tcurrent_ident = Some(accessor_ident);\n// \t}\n// \t(current_ident, current_item)\n// }\n\n// fn type_check_use_tree(use_tree: UseTree, parent_ctx: &Ctx, ctx: &mut Ctx) -> CheckResult<()> {\n// \tlet (base_ident, rest_idents) = use_tree.path_idents;\n// \tlet (current_ident, current_item) = do_accessors(&ctx, parent_ctx.checked_get(base_ident)?, rest_idents)?;\n// \tlet current_ident = current_ident.unwrap_or(base_ident);\n\n// \tmatch &use_tree.cap {\n// \t\tNone => {\n// \t\t\tctx.checked_insert(current_ident, current_item);\n// \t\t},\n// \t\tSome(cap) => match cap {\n// \t\t\t// TODO in case you want to not have two levels of nesting\n// \t\t\t// UseTreeCap::Empty => ctx.checked_insert(current_ident, current_item),\n// \t\t\tUseTreeCap::All => ctx.checked_insert_all(current_item),\n// \t\t\tUseTreeCap::AsName(as_name) => {\n// \t\t\t\tctx.checked_insert(as_name, current_item);\n// \t\t\t},\n// \t\t\tUseTreeCap::InnerTrees(inner_trees) => {\n// \t\t\t\tfor inner_tree in inner_trees {\n// \t\t\t\t\t// TODO don't short circuit on each of these, the internal errors are good enough\n// \t\t\t\t\tlet _ = type_check_use_tree(inner_tree, &parent_ctx, &mut ctx);\n// \t\t\t\t}\n// \t\t\t},\n// \t\t},\n// \t}\n// }\n\n\n// #[derive(Debug, Clone, PartialEq)]\n// enum TypeBody {\n// \tUnit,\n// \t// Tuple(Vec<TypeReference>),\n// \t// Record(Vec<FieldDefinition>),\n// \t// Union(Vec<VariantDefinition>),\n// \t// 
AnonymousUnion(Vec<TypeReference>)\n// }\n\n// // #[derive(Debug)]\n// // struct FieldDefinition {\n// // \tname: String,\n// // \ttype: TypeReference,\n// // }\n\n// // #[derive(Debug)]\n// // struct VariantDefinition {\n// // \tname: String,\n// // \ttype_body: TypeBody,\n// // }\n\n// #[derive(Debug)]\n// enum Pattern {\n// \t// for now only qualified *nominal* patterns are accepted? otherwise these constructor_names would be Option?\n// \tUnit { constructor_name: String },\n// \tCompound { constructor_name: String, inner_patterns: Vec<NamedPattern>, is_record: bool },\n// \tUnion(Vec<Pattern>),\n// }\n\n// #[derive(Debug)]\n// struct NamedPattern {\n// \tname: String,\n// \tpattern: Option<Pattern>,\n// }\n\n// #[derive(Debug, Clone, PartialEq)]\n// enum ModuleItem {\n// \tUse(UseTree),\n// \tModule(Module),\n// \tProp(PropDefinition),\n// \tTheorem(TheoremDefinition),\n// \tLog(Term),\n// \t// Model,\n// \t// Procedure,\n// }\n\n// impl ModuleItem {\n// \tfn give_name(&self) -> Option<&String> {\n// \t\tmatch self {\n// \t\t\tModuleItem::Use(_) => None,\n// \t\t\tModuleItem::Prop(PropDefinition { name, .. }) => Some(name),\n// \t\t\tModuleItem::Theorem(TheoremDefinition { name, .. 
}) => Some(name),\n// \t\t}\n// \t}\n// }\n\n\n// #[derive(Debug, Clone, PartialEq)]\n// struct UseTree {\n// \tpath_idents: (String, Vec<String>),\n// \tcap: UseTreeCap,\n// }\n\n// #[derive(Debug, Clone, PartialEq)]\n// enum UseTreeCap {\n// \tEmpty,\n// \tAll,\n// \tAsName(String),\n// \tInnerTrees(Vec<UseTree>),\n// }\n\n// impl UseTree {\n// \tfn basic(base_path: &'static str) -> UseTree {\n// \t\tUseTree { base_path: base_path.into(), cap: UseTreeCap::Empty }\n// \t}\n// \tfn basic_as(base_path: &'static str, as_name: &'static str) -> UseTree {\n// \t\tUseTree { base_path: base_path.into(), cap: UseTreeCap::AsName(as_name.into()) }\n// \t}\n// }\n\n// impl UseTreeCap {\n// \tfn inners<const N: usize>(static_inner_paths: [&'static str; N]) -> UseTreeCap {\n// \t\tlet mut inner_paths = vec![];\n// \t\tfor static_inner_path in static_inner_paths {\n// \t\t\tinner_paths.push(UseTree::basic(static_inner_path));\n// \t\t}\n// \t\tUseTreeCap::InnerTrees(inner_paths)\n// \t}\n// }\n\n// #[derive(Debug, Clone, PartialEq)]\n// struct PropDefinition {\n// \tname: String,\n// \ttype_body: TypeBody\n// }\n\n// #[derive(Debug, Clone, PartialEq)]\n// struct TheoremDefinition {\n// \tname: String,\n// \t// parameters: Vec<(NamedPattern, Option<Term>)>,\n// \treturn_type: Term,\n// \tstatements: Vec<Statement>,\n// }\n\n// #[derive(Debug, Clone, PartialEq)]\n// enum Statement {\n// \tBare(Term),\n// \tLet { name: String, term: Term },\n// \t// Return(Term),\n// \tInnerModuleItem(ModuleItem),\n// }\n\n// #[derive(Debug, Clone, PartialEq)]\n// enum Term {\n// \tLone(String),\n// \tChain(String, Vec<ChainItem>),\n// \tBlock { statements: Vec<Term> },\n// \tMatch {\n// \t\tdiscriminant: Term,\n// \t\t// discriminant_pattern: Pattern,\n// \t\treturn_type: Term,\n// \t\tarms: Vec<MatchArm>\n// \t},\n// \tFunc { parameters: Vec<NamedPattern>, return_type: Term, statements: Vec<Term> },\n// }\n\n// #[derive(Debug)]\n// struct MatchArm {\n// \tpattern: Pattern,\n// \tstatements: 
Vec<Term>,\n// }\n\n// #[derive(Debug)]\n// enum ChainItem {\n// \tAccess(String),\n// \tCall { arguments: Vec<Term> },\n// \t// IndexCall { arguments: Vec<Term> },\n// \t// TODO yikes? using a complex term to return a function that's called freestanding?\n// \tFreeCall { target: Term, arguments: Vec<Term> },\n// \t// tapping is only useful for debugging, and should be understood as provably not changing the current type\n// \tCatchCall { parameters: Either<NamedPattern, Vec<NamedPattern>>, statements: Vec<Term>, is_tap: bool },\n// \tChainedMatch { return_type: Term, arms: Vec<MatchArm> },\n// }\n\n\n\n// fn type_check_module_item(item: ModuleItem) {\n// \tmatch item {\n// \t\tModuleItem::Use(use_tree) => type_check_use_tree(use_tree),\n// \t\tModuleItem::Prop(PropDefinition { name, type_body }) => {\n// \t\t\t// TODO check that the name hasn't already been used, or perhaps that's handled by earlier stages?\n// \t\t\t// maybe instead check if this definition has already been flagged, and skip checking if it has\n\n// \t\t\t// check that the definition only refers to things that exist and are valid\n// \t\t\tmatch type_body {\n// \t\t\t\tUnit => { /* nothing to check! 
perhaps warn to just use std library's \"Trivial\" prop though */ },\n// \t\t\t\t// Tuple => ,\n// \t\t\t\t// Record,\n// \t\t\t\t// Union,\n// \t\t\t}\n// \t\t},\n// \t\tModuleItem::Theorem(TheoremDefinition { name, return_type, statements }) => {\n// \t\t\t// TODO check name isn't already used?\n\n// \t\t\t// TODO check function doesn't have infinite recursion\n// \t\t\t// check that return type matches type implied by statements\n// \t\t\tmatch type_check_statements(statements) {\n// \t\t\t\tNone => {\n// \t\t\t\t\tinvalid_items.insert(make_path_absolute(name));\n// \t\t\t\t},\n// \t\t\t\tSome(inferred_type) => {\n// \t\t\t\t\tif !type_assignable(inferred_type, return_type) {\n// \t\t\t\t\t\tinvalid_items.push(item);\n// \t\t\t\t\t\terrors.push(not_assignable_error(inferred_type, return_type));\n// \t\t\t\t\t}\n// \t\t\t\t},\n// \t\t\t}\n// \t\t},\n\n// \t\tModuleItem::Log(term) => {\n// \t\t\t// TODO make sure this can actually be performed but otherwise do nothing to the context\n// \t\t},\n// \t}\n// }\n\n// fn type_check_statements(statements: Vec<Statement>) -> TypeReference {\n// \t// TODO have to build this from existing module_ctx\n// \tlet mut local_ctx = HashMap::new();\n\n// \tlet mut statements = statements.into_iter().peekable();\n// \twhile let Some(statement) = statements.next()  {\n// \t\tmatch statement {\n// \t\t\tStatement::Let { name, term } => {\n// \t\t\t\tlet inferred_type = type_check_term(term);\n\n// \t\t\t\tlet existing_item = local_ctx.insert(name, inferred_type);\n// \t\t\t\tif existing_item.is_some() {\n// \t\t\t\t\terrors.push(format!(\"variable {name} is already defined\"));\n// \t\t\t\t}\n// \t\t\t},\n\n// \t\t\tStatement::InnerModuleItem(module_item) => {\n// \t\t\t\t// TODO add this module item to the running local_ctx\n// \t\t\t},\n\n// \t\t\t// this is proof checker, which means there's no such thing as mutation or effects,\n// \t\t\t// which means leaving a term bare can only mean this should be the resolved value of this 
line of statements\n// \t\t\tStatement::Bare(term) => {\n// \t\t\t\tif statements.peek().is_some() {\n// \t\t\t\t\terrors.push(format!(\"unreachable code\"));\n// \t\t\t\t}\n// \t\t\t\treturn Some(type_check_term(term, local_ctx));\n// \t\t\t},\n\n// \t\t\t// TODO return is a control flow concept that could still be interesting and useful in an immutable language, since a `let` could have a block or match or if or \"functional for\" (a function that is being called with a block) where return captures control flow\n// \t\t\t// this means a return can effect the inferred type of a line of statements *above* this one\n\n// \t\t\t// \"control flow\" in this context is *actually* just desugaring to a version of the function where things like a match have been moved up a level\n// \t\t\t// let a = match something { one => return 1, two => do_something_else() }; a + 2 // same as\n// \t\t\t// match something { one => 1, two => let a = do_something_else(); a + 2 }\n// \t\t\t// Term::Return(term)\n// \t\t}\n// \t}\n\n// \terrors.push(format!(\"statements never resolve to a value, which doesn't make sense in a proof checker\"));\n// \tNone\n// }\n\n// fn type_check_term(term: Term, ctx: &Ctx) -> Term {\n// \tmatch term {\n// \t\tTerm::Ident(ident) => {\n// \t\t\t// TODO why am I afraid this isn't correct or will recurse infinitely?\n// \t\t\tctx.infer_term_type(ident)\n// \t\t},\n// \t\tTerm::Block { statements } => {\n// \t\t\ttype_check_statements(statements)\n// \t\t},\n// \t\tTerm::Match { discriminant, return_type, arms } => {\n// \t\t\tlet discriminant_type = unimplemented!();\n// \t\t\tfor arm in arms {\n// \t\t\t\tcheck_pattern_compatible(discriminant_type, arm.pattern);\n// \t\t\t\t// all the magic is hiding in check_assignable, which has to do reduction and things in complex cases\n// \t\t\t\tcheck_assignable(type_check_statements(arm.statements), return_type)\n// \t\t\t}\n// \t\t},\n// \t\tTerm::Chain(first, rest) => {\n// \t\t\tlet mut chain_ctx = ctx.clone();\n// 
\t\t\tlet mut current_type = type_check_chain_root(first, &mut chain_ctx)?;\n// \t\t\tfor chain_item in rest {\n// \t\t\t\tcurrent_type = type_check_chain_item(chain_item, current_type, &mut chain_ctx)?;\n// \t\t\t}\n// \t\t\tcurrent_type\n// \t\t},\n// \t\tTerm::Func { parameters, return_type, statements } => {\n// \t\t\ttype_check_named_patterns(parameters, local_ctx);\n// \t\t\tcheck_assignable(type_check_statements(statements), return_type)\n// \t\t},\n// \t}\n// }\n\n// fn type_check_named_patterns(named_patterns: Vec<NamedPattern>, pattern_names: &mut HashSet<String>) {\n// \tfor named_pattern in named_patterns {\n// \t\tpattern_names.insert(named_pattern.name)?.get_mad_if_exists();\n// \t\tif let Some(pattern) = named_pattern.pattern {\n// \t\t\ttype_check_pattern(pattern, pattern_names);\n// \t\t}\n// \t}\n// }\n\n// fn type_check_pattern(pattern: Pattern, pattern_names: &mut HashSet<String>) {\n// \tmatch expr {\n// \t\tPattern::Unit { constructor_name } => {\n// \t\t\t// TODO look up this constructor_name and see if it exists and is compatible with being a type\n// \t\t},\n// \t\tPattern::Compound { constructor_name, inner_patterns, is_record } => {\n// \t\t\t// TODO check constructor_name exists and matches with is_record\n// \t\t\ttype_check_named_patterns(inner_patterns, pattern_names);\n// \t\t},\n// \t\tPattern::Union(patterns) => {\n// \t\t\tfor pattern in patterns {\n// \t\t\t\ttype_check_pattern(pattern, pattern_names);\n// \t\t\t}\n// \t\t},\n// \t}\n// }\n\n// fn type_check_chain_root(chain_root: ChainRoot, chain_ctx: &mut HashMap<String, Term>) -> Term {\n// \tmatch chain_root {\n// \t\tChainRoot::Path(path) => {\n// \t\t\t// TODO look up this path in the context\n// \t\t},\n// \t\tChainRoot::Call { path, arguments } => {\n// \t\t\t// TODO check the path exists, is callable, and its parameters match the arguments\n// \t\t},\n// \t}\n// }\n\n// fn type_check_chain_item(chain_item: ChainItem, current_type: Term, chain_ctx: &mut HashMap<String, 
Term>) -> Term {\n// \tmatch chain_item {\n// \t\tChainItem::FreeCall { path, arguments, is_bare } => {\n// \t\t\tif let Some(_) = current_type && is_bare {\n// \t\t\t\terrors.push(format!(\"used a bare call in the middle of a chain\"));\n// \t\t\t\treturn\n// \t\t\t}\n// \t\t},\n// \t\tChainItem::Access(accessor) => {\n// \t\t\t// TODO check this accessor exists on this type, give type of accessor\n// \t\t},\n// \t\tChainItem::AccessCall { accessor, arguments } => {\n// \t\t\t// TODO check the accessor exists, is callable, and its parameters match the arguments\n// \t\t},\n// \t\tChainItem::CatchCall { parameters, statements, is_tap } => {\n// \t\t\tmatch parameters {\n// \t\t\t\tEither::Left(parameter) => {\n// \t\t\t\t\tcheck_assignable(current_type, parameter)\n// \t\t\t\t\t// return if fail?\n// \t\t\t\t},\n// \t\t\t\tEither::Right(parameters) => {\n// \t\t\t\t\t// TODO type check current_type is a spreadable thing that matches the parameters\n// \t\t\t\t},\n// \t\t\t}\n// \t\t\t// TODO enrich the ctx with the parameters\n// \t\t\tlet inferred_type = type_check_statements(statements);\n// \t\t\t// a tapping call is only good for debugging, and doesn't affect the type\n// \t\t\tif is_tap { current_type } else { inferred_type }\n// \t\t},\n// \t}\n// }\n\n// fn type_check_use_tree(use_tree: UseTree) {\n// \tlet UseTree { base_path, cap } = use_tree;\n// \t// check that base_path exists\n// \tif !module_ctx.has(base_path) {\n// \t\terrors.push(format!(\"{base_path} doesn't exist\"));\n// \t}\n\n// \tmatch cap {\n// \t\tUseTreeCap::Empty => { /* nothing to check if final segment name has already been checked for validity */ }\n// \t\tUseTreeCap::All => {\n// \t\t\t// TODO check base_path refers to something with importable members\n// \t\t},\n// \t\tUseTreeCap::AsName(as_name) => {\n// \t\t\t// TODO check that as_name hasn't already been used, or perhaps that's handled by earlier stages?\n// \t\t},\n// \t\tUseTreeCap::InnerTrees(inner_trees) => {\n// \t\t\tfor 
inner_tree in inner_trees {\n// \t\t\t\ttype_check_use_tree(inner_tree);\n// \t\t\t}\n// \t\t},\n// \t}\n// }\n\n\n// fn check_assignable(observed_type: Term, desired_type: Term) -> Term {\n// \tif !type_assignable(observed_type, desired_type) {\n// \t\terrors.push(format!(\"{observed_type} not assignable to {desired_type}\"));\n// \t}\n// \tobserved_type\n// }\n\n// // TODO this is a proof checker, which means types are just terms\n// // this is where we need to do canonicalization and reduction and check for equivalence\n// fn type_assignable(observed_type: Term, desired_type: Term) -> bool {\n// \tunimplemented!()\n// }\n\n\n// #[cfg(test)]\n// mod tests {\n// \tuse super::*;\n\n// \tfn make_path(path: Path) -> Term {\n// \t\tTerm::Lone(ChainRoot::Path(path))\n// \t}\n// \tfn make_call(path: Path, arguments: Vec<Term>) -> Term {\n// \t\tTerm::Lone(ChainRoot::Call { path, arguments })\n// \t}\n\n// \tfn make_trivial_prop() -> ModuleItem {\n// \t\tModuleItem::Prop(PropDefinition {\n// \t\t\tname: \"trivial\".into(),\n// \t\t\ttype_body: TypeBody::Unit,\n// \t\t})\n// \t}\n// \tfn make_give_trivial_thm() -> ModuleItem {\n// \t\tModuleItem::Theorem(TheoremDefinition {\n// \t\t\tname: \"give_trivial\".into(),\n// \t\t\treturn_type: Term::Ident(\"trivial\".into()),\n// \t\t\tstatements: vec![\n// \t\t\t\tStatement::Return(Term::Ident(\"trivial\".into())),\n// \t\t\t],\n// \t\t})\n// \t}\n\n// \t#[test]\n// \tfn test_reduce_term() {\n// \t\t// type bool = true | false\n// \t\t// use bool::*\n// \t\t// proc invert(b: bool): bool; match b; true; false, false; true\n// \t\tlet program = Program { modules: vec![\n// \t\t\tModule { name: \"main\".into(), items: vec![make_bool_type(), make_invert_proc()], child_modules: vec![] },\n// \t\t] };\n\n// \t\tlet term = make_call(\"invert\", vec![make_path(\"true\")]);\n// \t\tassert_eq!(reduce_term(term, local_ctx), make_path(\"crate::main::bool::false\"));\n\n// \t\tlet term 
= make_call(\"invert\", vec![make_path(\"b\")]);\n// \t\t// TODO enrich local_ctx with b: bool, so its some opaque thing\n// \t\t// match b; true; false, false; true\n// \t\tassert_eq!(reduce_term(term, local_ctx), make_match(\"b\", vec![(\"true\", \"false\"), (\"false\", \"true\")]));\n\n\n// \t\t// prop trivial\n// \t\t// thm trivial_proven: trivial = trivial\n\n// \t\tlet term = make_path(\"trivial_proven\");\n// \t\tassert_eq!(reduce_term(term, local_ctx), make_path(\"crate::main::trivial\"));\n// \t}\n\n\n// \t#[test]\n// \tfn test_type_check_trivial() {\n// \t\tlet program = Program { modules: vec![\n// \t\t\tModule { name: \"main\".into(), items: vec![make_trivial_prop(), make_give_trivial_thm()], child_modules: vec![] },\n// \t\t] };\n\n// \t\tlet mut errors = vec![];\n// \t\ttype_check_program(program, &mut errors);\n// \t\tassert_eq!(errors, vec![]);\n// \t}\n\n// \t#[test]\n// \tfn test_build_program_path_index() {\n// \t\tlet trivial_prop = make_trivial_prop();\n// \t\tlet give_trivial_thm = make_give_trivial_thm();\n\n// \t\tlet program_path_index = build_program_path_index(Program { modules: vec![\n// \t\t\tModule { name: \"main\".into(), items: vec![trivial_prop.clone(), give_trivial_thm.clone()], child_modules: vec![] },\n// \t\t] });\n\n// \t\tlet expected = HashMap::from([\n// \t\t\t(\"crate::main::trivial\".into(), trivial_prop.clone()),\n// \t\t\t(\"crate::main::give_trivial\".into(), give_trivial_thm.clone()),\n// \t\t]);\n// \t\tassert_eq!(program_path_index, expected);\n\n\n// \t\tlet program_path_index = build_program_path_index(Program { modules: vec![\n// \t\t\tModule { name: \"main\".into(), items: vec![trivial_prop.clone(), give_trivial_thm.clone()], child_modules: vec![\n// \t\t\t\tModule { name: \"main_child\".into(), items: vec![give_trivial_thm.clone()], child_modules: vec![] },\n// \t\t\t] },\n\n// \t\t\tModule { name: \"side\".into(), items: vec![trivial_prop.clone(), give_trivial_thm.clone()], child_modules: vec![\n// 
\t\t\t\tModule { name: \"side_child\".into(), items: vec![give_trivial_thm.clone()], child_modules: vec![] },\n// \t\t\t] },\n// \t\t] });\n\n// \t\tlet expected = HashMap::from([\n// \t\t\t(\"crate::main::trivial\".into(), trivial_prop.clone()),\n// \t\t\t(\"crate::main::give_trivial\".into(), give_trivial_thm.clone()),\n// \t\t\t(\"crate::main::main_child::give_trivial\".into(), give_trivial_thm.clone()),\n\n// \t\t\t(\"crate::side::trivial\".into(), trivial_prop.clone()),\n// \t\t\t(\"crate::side::give_trivial\".into(), give_trivial_thm.clone()),\n// \t\t\t(\"crate::side::side_child::give_trivial\".into(), give_trivial_thm.clone()),\n// \t\t]);\n// \t\tassert_eq!(program_path_index, expected);\n// \t}\n\n// \t#[test]\n// \tfn test_build_module_ctx() {\n// \t\tlet module_path = \"crate\";\n\n// \t\tlet side_use = ModuleItem::Use(UseTree {\n// \t\t\t// TODO \"bare\" references like this are assumed to be \"relative\", so at the same level as the current module\n// \t\t\t// TODO you could also do super and root\n// \t\t\tbase_path: \"side\".into(),\n// \t\t\tcap: UseTreeCap::inners([\"whatever\", \"other\"]),\n// \t\t});\n// \t\tlet module = Module { name: \"main\".into(), items: vec![side_use, make_trivial_prop(), make_give_trivial_thm()], child_modules: vec![] };\n\n// \t\tlet expected = HashMap::from([\n// \t\t\t(\"trivial\".into(), \"crate::main::trivial\".into()),\n// \t\t\t(\"give_trivial\".into(), \"crate::main::give_trivial\".into()),\n// \t\t\t(\"whatever\".into(), \"crate::side::whatever\".into()),\n// \t\t\t(\"other\".into(), \"crate::side::other\".into()),\n// \t\t]);\n// \t\tassert_eq!(build_module_ctx(module_path, &module), expected);\n\n\n// \t\tlet side_use = ModuleItem::Use(UseTree {\n// \t\t\tbase_path: \"crate::side::child\".into(),\n// \t\t\tcap: UseTreeCap::InnerTrees(vec![\n// \t\t\t\tUseTree::basic(\"whatever\"),\n// \t\t\t\tUseTree::basic_as(\"other\", \"different\"),\n// \t\t\t\tUseTree { base_path: \"nested::thing\".into(), cap: 
UseTreeCap::inners([\"self\", \"a\", \"b\"]) },\n// \t\t\t]),\n// \t\t});\n// \t\tlet module = Module { name: \"main\".into(), items: vec![side_use, make_trivial_prop(), make_give_trivial_thm()], child_modules: vec![] };\n\n// \t\tlet expected = HashMap::from([\n// \t\t\t(\"trivial\".into(), \"crate::main::trivial\".into()),\n// \t\t\t(\"give_trivial\".into(), \"crate::main::give_trivial\".into()),\n// \t\t\t(\"whatever\".into(), \"crate::side::child::whatever\".into()),\n// \t\t\t(\"different\".into(), \"crate::side::child::other\".into()),\n// \t\t\t(\"thing\".into(), \"crate::side::child::nested::thing\".into()),\n// \t\t\t(\"a\".into(), \"crate::side::child::nested::thing::a\".into()),\n// \t\t\t(\"b\".into(), \"crate::side::child::nested::thing::b\".into()),\n// \t\t]);\n// \t\tassert_eq!(build_module_ctx(module_path, &module), expected);\n// \t}\n\n// \t#[test]\n// \tfn test_make_path_absolute() {\n// \t\tfor (current_absolute_path, possibly_relative_path, expected) in [\n// \t\t\t(\"crate\", \"crate::m\", \"crate::m\"),\n// \t\t\t(\"crate\", \"m\", \"crate::m\"),\n// \t\t\t(\"crate\", \"m::child\", \"crate::m::child\"),\n// \t\t\t(\"crate\", \"crate::m::child\", \"crate::m::child\"),\n\n// \t\t\t(\"crate::a::b::c\", \"crate::m\", \"crate::m\"),\n// \t\t\t(\"crate::a::b::c\", \"crate::m::child\", \"crate::m::child\"),\n// \t\t\t(\"crate::a::b::c\", \"m\", \"crate::a::b::c::m\"),\n// \t\t\t(\"crate::a::b::c\", \"m::child\", \"crate::a::b::c::m::child\"),\n// \t\t] {\n// \t\t\tassert_eq!(make_path_absolute(current_absolute_path, possibly_relative_path), expected);\n// \t\t}\n// \t}\n\n// \t#[test]\n// \tfn test_resolve_reference() {\n// \t\tlet program_path_index = HashMap::from([\n// \t\t\t(\"crate::main::trivial\".into(), make_trivial_prop()),\n// \t\t\t(\"crate::main::give_trivial\".into(), make_give_trivial_thm()),\n// \t\t]);\n// \t\tlet ctx = HashMap::from([]);\n// \t\tlet current_path = \"crate::main::\";\n\n// \t\tassert_eq!(\n// 
\t\t\tresolve_reference(ctx, current_path, \"trivial\"),\n// \t\t\t\"crate::main::trivial\"\n// \t\t);\n\n// \t\tlet ctx = HashMap::from([\n// \t\t\t(\"main\"),\n// \t\t]);\n// \t\tlet current_path = \"crate::side::\";\n// \t}\n\n// \t// #[test]\n// \t// fn test_resolve_term_type() {\n// \t// \t// in a scope with nothing \"included\" from higher scopes, identifiers resolve to the name of this scope\n\n// \t// \tassert_eq!(\n// \t// \t\tresolve_term_type(\"MyType\", \"some_module\", {}),\n// \t// \t\t// if we refer to MyType while in some_module, and there aren't any references to that name, it must be local\n// \t// \t\tvec![\"some_module\", \"MyType\"],\n// \t// \t);\n\n// \t// \tassert_eq!(\n// \t// \t\tresolve_term_type(\"MyType\", \"some_module\", { \"MyType\": \"other_module\", \"WhateverType\": \"yo_module\" }),\n// \t// \t\t// if we refer to it while in some_module but something like a `use` introduced that name from another place, it's that place\n// \t// \t\tvec![\"other_module\", \"MyType\"],\n// \t// \t);\n// \t// }\n\n// \t// #[test]\n// \t// fn trivial_type_and_fn() {\n// \t// \t// prop trivial\n// \t// \t// thm give_trivial: trivial;\n// \t// \t// \treturn trivial\n\n// \t// \tlet program = vec![\n// \t// \t\tmake_trivial_prop(),\n// \t// \t\tmake_give_trivial_thm(),\n// \t// \t];\n\n// \t// \tassert!(type_check_program(program).is_some());\n\n// \t// \t// assert the whole program type checks\n// \t// \t// assert querying give_trivial returns trivial, resolved\n// \t// }\n\n// \t// #[test]\n// \t// fn things() {\n// \t// \t// model bool; true | false\n// \t// \t// prop trivial\n// \t// \t// prop impossible; |\n\n// \t// \t// model Thing; a: A, b: B, other: Z\n\n// \t// \t// @(P, Q); prop And(P, Q)\n\n// \t// \t// @(P, Q)\n// \t// \t// prop And; (P, Q)\n\n// \t// \t// @(P, Q);\n// \t// \t// \tprop And; (P, Q)\n\n// \t// \t// prop And; (@P, @Q)\n\n// \t// \tlet and_type = TypeDefinition {\n// \t// \t\tname: \"And\".into(),\n// \t// \t\tkind: 
Kind::Prop,\n// \t// \t\tgenerics: vec![\n// \t// \t\t\tPattern{name: \"P\", type: None},\n// \t// \t\t\tPattern{name: \"Q\", type: None},\n// \t// \t\t],\n// \t// \t\tdefinition: TypeBody::Tuple(vec![bare(\"P\"), bare(\"Q\")]),\n// \t// \t};\n\n// \t// }\n// }\n\n// // prop Or; Left(@P) | Right(@Q)\n// // Or.Left\n// // Or/Left\n// // using slash as the \"namespace\" separator gives a nice similarity to modules and the filesystem\n// // that might be a bad thing! although Rust also blends the two by using :: for both\n// // honestly I think just `.` is the best, it's just the \"access namespace\" operator\n// // accessing the namespace of a struct is similar to accessing the namespace of a module or a type\n\n// // // anonymous unions\n// // alias A = 'a' | 'A' | int\n\n// // fn do_a = |: :match;\n// // \t'a' | 'A'; :do_something()\n// // \tint; :do_int_thing()\n\n// // `|` is for creating a function, `&` is for creating an assertion\n// // `|>` creates a function and catches, `|:` creates a function and chains\n// // TODO blaine what about return type annotation for anonymous functions?\n// // `&(n; , , )` creates an assertion and catches, `&( , , )` creates an assertion and chains (chains is the default)\n// // `&(n; )` creates an assertion and catches, `&` creates an assertion and chains (chains is the default)\n\n\n// // examples of the \"forall\" type\n// // @(a: A) -> Z\n// // @(a: A, b: B, inferred, d: D) -> Z\n\n// // examples of the \"simple function\" type\n// // (A) -> Z\n// // (A, B) -> Z\n\n// // there's no such thing as \"terms\" that only apply specifically for each of these,\n// // since there's *only* simple functions in the actual concrete language.\n\n// // \"theorems\" are functions which return things of kind Prop\n// // \"algorithms\" are functions which return things of kind Model, which may or may not have prop assertions on them\n\n// // \"functions\" are concrete and computational, have completely different rules\n\n\n\n\n// // TODO in 
general all of this path stuff should just reuse rustc functions, but I need something to play with for now\n\n// fn make_path_absolute(current_absolute_path: &str, possibly_relative_path: &str) -> String {\n// \tif possibly_relative_path.starts_with(\"crate\") {\n// \t\tpossibly_relative_path.to_owned()\n// \t}\n// \t// // TODO need to handle multiple \"super\" using a while loop?\n// \t// else if possibly_relative_path.starts_with(\"super\") {\n// \t// \tlet trimmed_current_absolute_path = current_absolute_path.split(\"::\").skip_last().unwrap();\n// \t// \t// TODO can't just trim as many times as you want, have to count how many\n// \t// \tlet reduced_relative_path = possibly_relative_path.trim_start_matches(\"super::\");\n// \t// \tlet reduced_relative_path = reduced_relative_path.trim_start_matches(\"super\");\n// \t// \tformat!(\"{trimmed_current_absolute_path}::{reduced_relative_path}\", )\n// \t// }\n// \telse {\n// \t\tformat!(\"{current_absolute_path}::{possibly_relative_path}\")\n// \t}\n// }\n\n\n\n// // fn make_dir_module(dirname: String, modules: Vec<Module>) -> Module {\n// // \tModule { name: dirname, items: modules.into_iter().map(ModuleItem::Module).collect(), child_modules: vec![] }\n// // }\n// // TODO make a function that walks a directory and recursively calls make_dir_module\n"
  },
  {
    "path": "old/inductive_serde.v",
    "content": "(*\n\tit seems possible to define a function that given an AST representing an inductive type is able to produce a pair of functions that can serialize/deserialize values of that inductive type to/from binary arrays\n\n\tthe cases that are especially interesting are:\n\t- any flat union (which should just be a single tag representing which variant the value is)\n\t- any type with exactly one unit variant and exactly one variant with only one recursive argument (which should basically have a tag/number at the beginning representing how many wrapped recursive constructors there are, and the rest of the array is any non-recursive wrapped data values) (the cases I have in mind here are natural numbers and lists)\n*)\n\nAdd LoadPath \"/home/blaine/lab/cpdtlib\" as Cpdt.\nSet Implicit Arguments. Set Asymmetric Patterns.\nRequire Import List String Cpdt.CpdtTactics.\nImport ListNotations.\nFrom stdpp Require Import base options stringmap.\n\n(*Definition yo: stringmap nat := {[ \"hey\" := 1; \"dude\" := 2 ]}.\nExample test_yo: (yo !!! \"hey\") = 1. Proof. reflexivity. Qed.\nExample test_yo': (yo !! \"hey\") = Some 1. Proof. reflexivity. Qed.\nExample test_yo'': (yo !!! \"nope\") = 0. Proof. reflexivity. Qed.*)\n\nInductive bit: Type := b0 | b1.\nNotation bit_array := (list bit).\n\nInductive TypeReference: Type := self_ref | other_ref (name: string).\n\nInductive ConstructorNode := constructor_node { constructor_args: list TypeReference }.\n\n(* TODO polymorphic args *)\nInductive InductiveType := inductive_node { constructors: stringmap ConstructorNode }.\n\nInductive InductiveValue := inductive_value {\n\tvalue_type: string;\n\tvalue_constructor: string;\n\tvalue_args: list InductiveValue\n}.\n\nNotation InductiveContext := (stringmap InductiveType).\n\n\n(* do we have to modify/reduce ctx as we go down? *)\nFixpoint\nValueOfType\n\t(ctx: InductiveContext) (value: InductiveValue) (type: InductiveType)\n\t{struct value}\n: Prop :=\n\tctx !! 
value.(value_type) = Some type\n\t/\\ exists constructor, type.(constructors) !! value.(value_constructor) = Some constructor\n\t/\\ ValuesOfTypes ctx value.(value_args) constructor.(constructor_args)\nwith\nValuesOfTypes\n\t(ctx: InductiveContext) (values: list InductiveValue) (type_refs: list TypeReference)\n\t{struct values}\n: Prop :=\n\tmatch values, type_refs with\n\t| [], [] => True\n\t| value :: values', type_ref :: type_refs' =>\n\t\tlet remainder_well_typed := (ValuesOfTypes ctx values' type_refs') in\n\t\tmatch type_ref with\n\t\t| self_ref => remainder_well_typed\n\t\t| other_ref other_name =>\n\t\t\texists other_type, ctx !! other_name = Some other_type /\\ ValueOfType ctx value other_type\n\t\t\t/\\ remainder_well_typed\n\t\tend\n\t| _, _ => False\n\tend\n.\n\n\n\nInductive Result (T: Type) (E: Type): Type :=\n\t| Ok (value: T)\n\t| Err (error: E).\n\nArguments Ok {T} {E} _.\nArguments Err {T} {E} _.\n\nNotation serializer T := (T -> bit_array).\nNotation deserializer T := (bit_array -> Result T string).\n\n(* TODO actually produce the functions; for now this stub just returns an error *)\nDefinition produce_serde_functions\n\t(node: InductiveType)\n: Result ((serializer InductiveValue) * (deserializer InductiveValue)) string :=\n\tErr \"unimplemented\"%string.\n"
  },
  {
    "path": "old/machine.md",
    "content": "```\nmap_disjoint_sym: ∀ (K: Type) (M: Type → Type) (H0: ∀ A: Type, Lookup K A (M A)) (A: Type), Symmetric (map_disjoint: relation (M A))\nmap_disjoint_weaken_l: ∀ (K: Type) (M: Type → Type) (H0: ∀ A: Type, Lookup K A (M A)) (A: Type) (m1 m1' m2: M A), m1' ##ₘ m2 → m1 ⊆ m1' → m1 ##ₘ m2\nmap_disjoint_weaken_r: ∀ (K: Type) (M: Type → Type) (H0: ∀ A: Type, Lookup K A (M A)) (A: Type) (m1 m2 m2': M A), m1 ##ₘ m2' → m2 ⊆ m2' → m1 ##ₘ m2\nmap_disjoint_weaken: ∀ (K: Type) (M: Type → Type) (H0: ∀ A: Type, Lookup K A (M A)) (A: Type) (m1 m1' m2 m2': M A), m1' ##ₘ m2' → m1 ⊆ m1' → m2 ⊆ m2' → m1 ##ₘ m2\nmap_disjoint_Some_r: ∀ (K: Type) (M: Type → Type) (H0: ∀ A: Type, Lookup K A (M A)) (A: Type) (m1 m2: M A) (i: K) (x: A), m1 ##ₘ m2 → m2 !! i = Some x → m1 !! i = None\nmap_disjoint_Some_l: ∀ (K: Type) (M: Type → Type) (H0: ∀ A: Type, Lookup K A (M A)) (A: Type) (m1 m2: M A) (i: K) (x: A), m1 ##ₘ m2 → m1 !! i = Some x → m2 !! i = None\nmap_disjoint_proper: ∀ (K: Type) (M: Type → Type) (H0: ∀ A: Type, Lookup K A (M A)) (A: Type) (H7: Equiv A), Proper (equiv ==> equiv ==> iff) map_disjoint\nmap_disjoint_alt: ∀ (K: Type) (M: Type → Type) (H0: ∀ A: Type, Lookup K A (M A)) (A: Type) (m1 m2: M A), m1 ##ₘ m2 ↔ (∀ i: K, m1 !! i = None ∨ m2 !! i = None)\nmap_disjoint_spec:\n  ∀ (K: Type) (M: Type → Type) (H0: ∀ A: Type, Lookup K A (M A)) (A: Type) (m1 m2: M A), m1 ##ₘ m2 ↔ (∀ (i: K) (x y: A), m1 !! i = Some x → m2 !! 
i = Some y → False)\nmap_disjoint_empty_r:\n  ∀ (K: Type) (M: Type → Type) (H: FMap M) (H0: ∀ A: Type, Lookup K A (M A)) (H1: ∀ A: Type, Empty (M A)) (H2: ∀ A: Type, PartialAlter K A (M A))\n    (H3: OMap M) (H4: Merge M) (H5: ∀ A: Type, FinMapToList K A (M A)) (EqDecision0: EqDecision K), FinMap K M → ∀ (A: Type) (m: M A), m ##ₘ ∅\nmap_disjoint_empty_l:\n  ∀ (K: Type) (M: Type → Type) (H: FMap M) (H0: ∀ A: Type, Lookup K A (M A)) (H1: ∀ A: Type, Empty (M A)) (H2: ∀ A: Type, PartialAlter K A (M A))\n    (H3: OMap M) (H4: Merge M) (H5: ∀ A: Type, FinMapToList K A (M A)) (EqDecision0: EqDecision K), FinMap K M → ∀ (A: Type) (m: M A), ∅ ##ₘ m\nmap_disjoint_delete_l:\n  ∀ (K: Type) (M: Type → Type) (H: FMap M) (H0: ∀ A: Type, Lookup K A (M A)) (H1: ∀ A: Type, Empty (M A)) (H2: ∀ A: Type, PartialAlter K A (M A))\n    (H3: OMap M) (H4: Merge M) (H5: ∀ A: Type, FinMapToList K A (M A)) (EqDecision0: EqDecision K),\n    FinMap K M → ∀ (A: Type) (m1 m2: M A) (i: K), m1 ##ₘ m2 → delete i m1 ##ₘ m2\nmap_disjoint_delete_r:\n  ∀ (K: Type) (M: Type → Type) (H: FMap M) (H0: ∀ A: Type, Lookup K A (M A)) (H1: ∀ A: Type, Empty (M A)) (H2: ∀ A: Type, PartialAlter K A (M A))\n    (H3: OMap M) (H4: Merge M) (H5: ∀ A: Type, FinMapToList K A (M A)) (EqDecision0: EqDecision K),\n    FinMap K M → ∀ (A: Type) (m1 m2: M A) (i: K), m1 ##ₘ m2 → m1 ##ₘ delete i m2\nmap_union_comm:\n  ∀ (K: Type) (M: Type → Type) (H: FMap M) (H0: ∀ A: Type, Lookup K A (M A)) (H1: ∀ A: Type, Empty (M A)) (H2: ∀ A: Type, PartialAlter K A (M A))\n    (H3: OMap M) (H4: Merge M) (H5: ∀ A: Type, FinMapToList K A (M A)) (EqDecision0: EqDecision K), FinMap K M → ∀ (A: Type) (m1 m2: M A), m1 ##ₘ m2 → m1 ∪ m2 = m2 ∪ m1\nmap_disjoint_difference_r:\n  ∀ (K: Type) (M: Type → Type) (H: FMap M) (H0: ∀ A: Type, Lookup K A (M A)) (H1: ∀ A: Type, Empty (M A)) (H2: ∀ A: Type, PartialAlter K A (M A))\n    (H3: OMap M) (H4: Merge M) (H5: ∀ A: Type, FinMapToList K A (M A)) (EqDecision0: EqDecision K), FinMap K M → ∀ (A: Type) (m1 m2: 
M A), m1 ⊆ m2 → m1 ##ₘ m2 ∖ m1\nmap_disjoint_difference_l:\n  ∀ (K: Type) (M: Type → Type) (H: FMap M) (H0: ∀ A: Type, Lookup K A (M A)) (H1: ∀ A: Type, Empty (M A)) (H2: ∀ A: Type, PartialAlter K A (M A))\n    (H3: OMap M) (H4: Merge M) (H5: ∀ A: Type, FinMapToList K A (M A)) (EqDecision0: EqDecision K), FinMap K M → ∀ (A: Type) (m1 m2: M A), m1 ⊆ m2 → m2 ∖ m1 ##ₘ m1\nmap_union_subseteq_r:\n  ∀ (K: Type) (M: Type → Type) (H: FMap M) (H0: ∀ A: Type, Lookup K A (M A)) (H1: ∀ A: Type, Empty (M A)) (H2: ∀ A: Type, PartialAlter K A (M A))\n    (H3: OMap M) (H4: Merge M) (H5: ∀ A: Type, FinMapToList K A (M A)) (EqDecision0: EqDecision K), FinMap K M → ∀ (A: Type) (m1 m2: M A), m1 ##ₘ m2 → m2 ⊆ m1 ∪ m2\nmap_disjoint_union_l_2:\n  ∀ (K: Type) (M: Type → Type) (H: FMap M) (H0: ∀ A: Type, Lookup K A (M A)) (H1: ∀ A: Type, Empty (M A)) (H2: ∀ A: Type, PartialAlter K A (M A))\n    (H3: OMap M) (H4: Merge M) (H5: ∀ A: Type, FinMapToList K A (M A)) (EqDecision0: EqDecision K),\n    FinMap K M → ∀ (A: Type) (m1 m2 m3: M A), m1 ##ₘ m3 → m2 ##ₘ m3 → m1 ∪ m2 ##ₘ m3\nmap_disjoint_union_r_2:\n  ∀ (K: Type) (M: Type → Type) (H: FMap M) (H0: ∀ A: Type, Lookup K A (M A)) (H1: ∀ A: Type, Empty (M A)) (H2: ∀ A: Type, PartialAlter K A (M A))\n    (H3: OMap M) (H4: Merge M) (H5: ∀ A: Type, FinMapToList K A (M A)) (EqDecision0: EqDecision K),\n    FinMap K M → ∀ (A: Type) (m1 m2 m3: M A), m1 ##ₘ m2 → m1 ##ₘ m3 → m1 ##ₘ m2 ∪ m3\nmap_disjoint_fmap:\n  ∀ (K: Type) (M: Type → Type) (H: FMap M) (H0: ∀ A: Type, Lookup K A (M A)) (H1: ∀ A: Type, Empty (M A)) (H2: ∀ A: Type, PartialAlter K A (M A))\n    (H3: OMap M) (H4: Merge M) (H5: ∀ A: Type, FinMapToList K A (M A)) (EqDecision0: EqDecision K),\n    FinMap K M → ∀ (A B: Type) (f1 f2: A → B) (m1 m2: M A), m1 ##ₘ m2 ↔ f1 <$> m1 ##ₘ f2 <$> m2\nmap_disjoint_omap:\n  ∀ (K: Type) (M: Type → Type) (H: FMap M) (H0: ∀ A: Type, Lookup K A (M A)) (H1: ∀ A: Type, Empty (M A)) (H2: ∀ A: Type, PartialAlter K A (M A))\n    (H3: OMap M) (H4: Merge M) (H5: ∀ A: 
Type, FinMapToList K A (M A)) (EqDecision0: EqDecision K),\n    FinMap K M → ∀ (A B: Type) (f1 f2: A → option B) (m1 m2: M A), m1 ##ₘ m2 → omap f1 m1 ##ₘ omap f2 m2\nmap_disjoint_union_list_l_2:\n  ∀ (K: Type) (M: Type → Type) (H: FMap M) (H0: ∀ A: Type, Lookup K A (M A)) (H1: ∀ A: Type, Empty (M A)) (H2: ∀ A: Type, PartialAlter K A (M A))\n    (H3: OMap M) (H4: Merge M) (H5: ∀ A: Type, FinMapToList K A (M A)) (EqDecision0: EqDecision K),\n    FinMap K M → ∀ (A: Type) (ms: list (M A)) (m: M A), Forall (λ m2: M A, m2 ##ₘ m) ms → ⋃ ms ##ₘ m\nmap_disjoint_union_list_r_2:\n  ∀ (K: Type) (M: Type → Type) (H: FMap M) (H0: ∀ A: Type, Lookup K A (M A)) (H1: ∀ A: Type, Empty (M A)) (H2: ∀ A: Type, PartialAlter K A (M A))\n    (H3: OMap M) (H4: Merge M) (H5: ∀ A: Type, FinMapToList K A (M A)) (EqDecision0: EqDecision K),\n    FinMap K M → ∀ (A: Type) (ms: list (M A)) (m: M A), Forall (λ m2: M A, m2 ##ₘ m) ms → m ##ₘ ⋃ ms\nmap_disjoint_union_l:\n  ∀ (K: Type) (M: Type → Type) (H: FMap M) (H0: ∀ A: Type, Lookup K A (M A)) (H1: ∀ A: Type, Empty (M A)) (H2: ∀ A: Type, PartialAlter K A (M A))\n    (H3: OMap M) (H4: Merge M) (H5: ∀ A: Type, FinMapToList K A (M A)) (EqDecision0: EqDecision K),\n    FinMap K M → ∀ (A: Type) (m1 m2 m3: M A), m1 ∪ m2 ##ₘ m3 ↔ m1 ##ₘ m3 ∧ m2 ##ₘ m3\nmap_disjoint_union_r:\n  ∀ (K: Type) (M: Type → Type) (H: FMap M) (H0: ∀ A: Type, Lookup K A (M A)) (H1: ∀ A: Type, Empty (M A)) (H2: ∀ A: Type, PartialAlter K A (M A))\n    (H3: OMap M) (H4: Merge M) (H5: ∀ A: Type, FinMapToList K A (M A)) (EqDecision0: EqDecision K),\n    FinMap K M → ∀ (A: Type) (m1 m2 m3: M A), m1 ##ₘ m2 ∪ m3 ↔ m1 ##ₘ m2 ∧ m1 ##ₘ m3\nmap_disjoint_foldr_delete_l:\n  ∀ (K: Type) (M: Type → Type) (H: FMap M) (H0: ∀ A: Type, Lookup K A (M A)) (H1: ∀ A: Type, Empty (M A)) (H2: ∀ A: Type, PartialAlter K A (M A))\n    (H3: OMap M) (H4: Merge M) (H5: ∀ A: Type, FinMapToList K A (M A)) (EqDecision0: EqDecision K),\n    FinMap K M → ∀ (A: Type) (m1 m2: M A) (is: list K), m1 ##ₘ m2 → foldr delete 
m1 is ##ₘ m2\nmap_disjoint_foldr_delete_r:\n  ∀ (K: Type) (M: Type → Type) (H: FMap M) (H0: ∀ A: Type, Lookup K A (M A)) (H1: ∀ A: Type, Empty (M A)) (H2: ∀ A: Type, PartialAlter K A (M A))\n    (H3: OMap M) (H4: Merge M) (H5: ∀ A: Type, FinMapToList K A (M A)) (EqDecision0: EqDecision K),\n    FinMap K M → ∀ (A: Type) (m1 m2: M A) (is: list K), m1 ##ₘ m2 → m1 ##ₘ foldr delete m2 is\nmap_disjoint_union_list_l:\n  ∀ (K: Type) (M: Type → Type) (H: FMap M) (H0: ∀ A: Type, Lookup K A (M A)) (H1: ∀ A: Type, Empty (M A)) (H2: ∀ A: Type, PartialAlter K A (M A))\n    (H3: OMap M) (H4: Merge M) (H5: ∀ A: Type, FinMapToList K A (M A)) (EqDecision0: EqDecision K),\n    FinMap K M → ∀ (A: Type) (ms: list (M A)) (m: M A), ⋃ ms ##ₘ m ↔ Forall (λ m2: M A, m2 ##ₘ m) ms\nmap_disjoint_union_list_r:\n  ∀ (K: Type) (M: Type → Type) (H: FMap M) (H0: ∀ A: Type, Lookup K A (M A)) (H1: ∀ A: Type, Empty (M A)) (H2: ∀ A: Type, PartialAlter K A (M A))\n    (H3: OMap M) (H4: Merge M) (H5: ∀ A: Type, FinMapToList K A (M A)) (EqDecision0: EqDecision K),\n    FinMap K M → ∀ (A: Type) (ms: list (M A)) (m: M A), m ##ₘ ⋃ ms ↔ Forall (λ m2: M A, m2 ##ₘ m) ms\nmap_Forall_union_1_2:\n  ∀ (K: Type) (M: Type → Type) (H: FMap M) (H0: ∀ A: Type, Lookup K A (M A)) (H1: ∀ A: Type, Empty (M A)) (H2: ∀ A: Type, PartialAlter K A (M A))\n    (H3: OMap M) (H4: Merge M) (H5: ∀ A: Type, FinMapToList K A (M A)) (EqDecision0: EqDecision K),\n    FinMap K M → ∀ (A: Type) (m1 m2: M A) (P: K → A → Prop), m1 ##ₘ m2 → map_Forall P (m1 ∪ m2) → map_Forall P m2\nmap_union_subseteq_r':\n  ∀ (K: Type) (M: Type → Type) (H: FMap M) (H0: ∀ A: Type, Lookup K A (M A)) (H1: ∀ A: Type, Empty (M A)) (H2: ∀ A: Type, PartialAlter K A (M A))\n    (H3: OMap M) (H4: Merge M) (H5: ∀ A: Type, FinMapToList K A (M A)) (EqDecision0: EqDecision K), FinMap K M → ∀ (A: Type) (m1 m2 m3: M A), m2 ##ₘ m3 → m1 ⊆ m3 → m1 ⊆ m2 ∪ m3\nmap_disjoint_singleton_l_2:\n  ∀ (K: Type) (M: Type → Type) (H: FMap M) (H0: ∀ A: Type, Lookup K A (M A)) (H1: ∀ A: Type, 
Empty (M A)) (H2: ∀ A: Type, PartialAlter K A (M A))\n    (H3: OMap M) (H4: Merge M) (H5: ∀ A: Type, FinMapToList K A (M A)) (EqDecision0: EqDecision K),\n    FinMap K M → ∀ (A: Type) (m: M A) (i: K) (x: A), m !! i = None → {[i := x]} ##ₘ m\nmap_disjoint_singleton_r_2:\n  ∀ (K: Type) (M: Type → Type) (H: FMap M) (H0: ∀ A: Type, Lookup K A (M A)) (H1: ∀ A: Type, Empty (M A)) (H2: ∀ A: Type, PartialAlter K A (M A))\n    (H3: OMap M) (H4: Merge M) (H5: ∀ A: Type, FinMapToList K A (M A)) (EqDecision0: EqDecision K),\n    FinMap K M → ∀ (A: Type) (m: M A) (i: K) (x: A), m !! i = None → m ##ₘ {[i := x]}\nmap_union_cancel_l:\n  ∀ (K: Type) (M: Type → Type) (H: FMap M) (H0: ∀ A: Type, Lookup K A (M A)) (H1: ∀ A: Type, Empty (M A)) (H2: ∀ A: Type, PartialAlter K A (M A))\n    (H3: OMap M) (H4: Merge M) (H5: ∀ A: Type, FinMapToList K A (M A)) (EqDecision0: EqDecision K),\n    FinMap K M → ∀ (A: Type) (m1 m2 m3: M A), m1 ##ₘ m3 → m2 ##ₘ m3 → m3 ∪ m1 = m3 ∪ m2 → m1 = m2\nmap_union_cancel_r:\n  ∀ (K: Type) (M: Type → Type) (H: FMap M) (H0: ∀ A: Type, Lookup K A (M A)) (H1: ∀ A: Type, Empty (M A)) (H2: ∀ A: Type, PartialAlter K A (M A))\n    (H3: OMap M) (H4: Merge M) (H5: ∀ A: Type, FinMapToList K A (M A)) (EqDecision0: EqDecision K),\n    FinMap K M → ∀ (A: Type) (m1 m2 m3: M A), m1 ##ₘ m3 → m2 ##ₘ m3 → m1 ∪ m3 = m2 ∪ m3 → m1 = m2\nmap_disjoint_singleton_l:\n  ∀ (K: Type) (M: Type → Type) (H: FMap M) (H0: ∀ A: Type, Lookup K A (M A)) (H1: ∀ A: Type, Empty (M A)) (H2: ∀ A: Type, PartialAlter K A (M A))\n    (H3: OMap M) (H4: Merge M) (H5: ∀ A: Type, FinMapToList K A (M A)) (EqDecision0: EqDecision K),\n    FinMap K M → ∀ (A: Type) (m: M A) (i: K) (x: A), {[i := x]} ##ₘ m ↔ m !! 
i = None
map_disjoint_singleton_r:
  ∀ `{FinMap K M} (A: Type) (m: M A) (i: K) (x: A), m ##ₘ {[i := x]} ↔ m !! i = None
map_seq_app_disjoint:
  ∀ `{FinMap nat M} (A: Type) (start: nat) (xs1 xs2: list A), map_seq start xs1 ##ₘ map_seq (start + length xs1) xs2
map_union_mono_r:
  ∀ `{FinMap K M} (A: Type) (m1 m2 m3: M A), m2 ##ₘ m3 → m1 ⊆ m2 → m1 ∪ m3 ⊆ m2 ∪ m3
map_disjoint_insert_l_2:
  ∀ `{FinMap K M} (A: Type) (m1 m2: M A) (i: K) (x: A), m2 !! i = None → m1 ##ₘ m2 → <[i:=x]> m1 ##ₘ m2
map_disjoint_insert_r_2:
  ∀ `{FinMap K M} (A: Type) (m1 m2: M A) (i: K) (x: A), m1 !! i = None → m1 ##ₘ m2 → m1 ##ₘ <[i:=x]> m2
map_omap_union:
  ∀ `{FinMap K M} (A B: Type) (f: A → option B) (m1 m2: M A), m1 ##ₘ m2 → omap f (m1 ∪ m2) = omap f m1 ∪ omap f m2
map_Forall_union:
  ∀ `{FinMap K M} (A: Type) (m1 m2: M A) (P: K → A → Prop), m1 ##ₘ m2 → map_Forall P (m1 ∪ m2) ↔ map_Forall P m1 ∧ map_Forall P m2
lookup_union_Some_r:
  ∀ `{FinMap K M} (A: Type) (m1 m2: M A) (i: K) (x: A), m1 ##ₘ m2 → m2 !! i = Some x → (m1 ∪ m2) !! i = Some x
map_union_reflecting_r:
  ∀ `{FinMap K M} (A: Type) (m1 m2 m3: M A), m1 ##ₘ m3 → m2 ##ₘ m3 → m1 ∪ m3 ⊆ m2 ∪ m3 → m1 ⊆ m2
map_union_reflecting_l:
  ∀ `{FinMap K M} (A: Type) (m1 m2 m3: M A), m3 ##ₘ m1 → m3 ##ₘ m2 → m3 ∪ m1 ⊆ m3 ∪ m2 → m1 ⊆ m2
map_disjoint_insert_r:
  ∀ `{FinMap K M} (A: Type) (m1 m2: M A) (i: K) (x: A), m1 ##ₘ <[i:=x]> m2 ↔ m1 !! i = None ∧ m1 ##ₘ m2
map_disjoint_insert_l:
  ∀ `{FinMap K M} (A: Type) (m1 m2: M A) (i: K) (x: A), <[i:=x]> m1 ##ₘ m2 ↔ m2 !! i = None ∧ m1 ##ₘ m2
map_size_disj_union:
  ∀ `{FinMap K M} (A: Type) (m1 m2: M A), m1 ##ₘ m2 → base.size (m1 ∪ m2) = base.size m1 + base.size m2
map_not_disjoint:
  ∀ `{FinMap K M} (A: Type) (m1 m2: M A), ¬ m1 ##ₘ m2 ↔ (∃ (i: K) (x1 x2: A), m1 !! i = Some x1 ∧ m2 !! i = Some x2)
map_disjoint_list_to_map_r:
  ∀ `{FinMap K M} (A: Type) (m: M A) (ixs: list (K * A)), m ##ₘ list_to_map ixs ↔ Forall (λ ix: K * A, m !! ix.1 = None) ixs
map_disjoint_list_to_map_l:
  ∀ `{FinMap K M} (A: Type) (m: M A) (ixs: list (K * A)), list_to_map ixs ##ₘ m ↔ Forall (λ ix: K * A, m !! ix.1 = None) ixs
map_disjoint_dom_2:
  ∀ `{FinMapDom K M D} (A: Type) (m1 m2: M A), dom D m1 ## dom D m2 → m1 ##ₘ m2
map_disjoint_dom_1:
  ∀ `{FinMapDom K M D} (A: Type) (m1 m2: M A), m1 ##ₘ m2 → dom D m1 ## dom D m2
lookup_union_Some:
  ∀ `{FinMap K M} (A: Type) (m1 m2: M A) (i: K) (x: A), m1 ##ₘ m2 → (m1 ∪ m2) !! i = Some x ↔ m1 !! i = Some x ∨ m2 !! i = Some x
map_disjoint_dom:
  ∀ `{FinMapDom K M D} (A: Type) (m1 m2: M A), m1 ##ₘ m2 ↔ dom D m1 ## dom D m2
map_disjoint_filter:
  ∀ `{FinMap K M} (A: Type) (P: K * A → Prop) `{∀ x: K * A, Decision (P x)} (m1 m2: M A), m1 ##ₘ m2 → filter P m1 ##ₘ filter P m2
map_disjoint_list_to_map_zip_l_2:
  ∀ `{FinMap K M} (A: Type) (m: M A) (is: list K) (xs: list A), length is = length xs → Forall (λ i: K, m !! i = None) is → list_to_map (zip is xs) ##ₘ m
map_disjoint_list_to_map_zip_r_2:
  ∀ `{FinMap K M} (A: Type) (m: M A) (is: list K) (xs: list A), length is = length xs → Forall (λ i: K, m !! i = None) is → m ##ₘ list_to_map (zip is xs)
map_disjoint_list_to_map_zip_l:
  ∀ `{FinMap K M} (A: Type) (m: M A) (is: list K) (xs: list A), length is = length xs → list_to_map (zip is xs) ##ₘ m ↔ Forall (λ i: K, m !! i = None) is
map_disjoint_list_to_map_zip_r:
  ∀ `{FinMap K M} (A: Type) (m: M A) (is: list K) (xs: list A), length is = length xs → m ##ₘ list_to_map (zip is xs) ↔ Forall (λ i: K, m !! i = None) is
map_disjoint_filter_complement:
  ∀ `{FinMap K M} (A: Type) (P: K * A → Prop) `{∀ x: K * A, Decision (P x)} (m: M A), filter P m ##ₘ filter (λ v: K * A, ¬ P v) m
map_disjoint_kmap:
  ∀ `{FinMap K1 M1} `{FinMap K2 M2} (f: K1 → K2), Inj eq eq f → ∀ (A: Type) (m1 m2: M1 A), kmap f m1 ##ₘ kmap f m2 ↔ m1 ##ₘ m2
map_filter_union:
  ∀ `{FinMap K M} (A: Type) (P: K * A → Prop) `{∀ x: K * A, Decision (P x)} (m1 m2: M A), m1 ##ₘ m2 → filter P (m1 ∪ m2) = filter P m1 ∪ filter P m2
dom_union_inv_L:
  ∀ `{FinMapDom K M D}, LeibnizEquiv D → RelDecision elem_of → ∀ (A: Type) (m: M A) (X1 X2: D), X1 ## X2 → dom D m = X1 ∪ X2 → ∃ m1 m2: M A, m = m1 ∪ m2 ∧ m1 ##ₘ m2 ∧ dom D m1 = X1 ∧ dom D m2 = X2
map_cross_split:
  ∀ `{FinMap K M} (A: Type) (ma mb mc md: M A),
    ma ##ₘ mb → mc ##ₘ md → ma ∪ mb = mc ∪ md →
    ∃ mac mad mbc mbd: M A, mac ##ₘ mad ∧ mbc ##ₘ mbd ∧ mac ##ₘ mbc ∧ mad ##ₘ mbd ∧ mac ∪ mad = ma ∧ mbc ∪ mbd = mb ∧ mac ∪ mbc = mc ∧ mad ∪ mbd = md
dom_union_inv:
  ∀ `{FinMapDom K M D}, RelDecision elem_of → ∀ (A: Type) (m: M A) (X1 X2: D), X1 ## X2 → dom D m ≡ X1 ∪ X2 → ∃ m1 m2: M A, m = m1 ∪ m2 ∧ m1 ##ₘ m2 ∧ dom D m1 ≡ X1 ∧ dom D m2 ≡ X2


H0: registers s1 ##ₘ registers s2 ∪ registers s3
H: program_counter s1 ##ₘ program_counter s2 ∪ program_counter s3

map_union_assoc:
  ∀ `{FinMap K M} (A : Type), Assoc eq union
map_union_idemp:
  ∀ `{FinMap K M} (A : Type), IdemP eq union
map_union_empty:
  ∀ `{FinMap K M} (A : Type), RightId eq ∅ union
map_empty_union:
  ∀ `{FinMap K M} (A : Type), LeftId eq ∅ union
map_union_subseteq_l:
  ∀ `{FinMap K M} (A : Type) (m1 m2 : M A), m1 ⊆ m1 ∪ m2
map_subseteq_union:
  ∀ `{FinMap K M} (A : Type) (m1 m2 : M A), m1 ⊆ m2 → m1 ∪ m2 = m2
map_union_comm:
  ∀ `{FinMap K M} (A : Type) (m1 m2 : M A), m1 ##ₘ m2 → m1 ∪ m2 = m2 ∪ m1
map_union_subseteq_r:
  ∀ `{FinMap K M} (A : Type) (m1 m2 : M A), m1 ##ₘ m2 → m2 ⊆ m1 ∪ m2
map_positive_l:
  ∀ `{FinMap K M} (A : Type) (m1 m2 : M A), m1 ∪ m2 = ∅ → m1 = ∅
map_disjoint_union_l_2:
  ∀ `{FinMap K M} (A : Type) (m1 m2 m3 : M A), m1 ##ₘ m3 → m2 ##ₘ m3 → m1 ∪ m2 ##ₘ m3
map_disjoint_union_r_2:
  ∀ `{FinMap K M} (A : Type) (m1 m2 m3 : M A), m1 ##ₘ m2 → m1 ##ₘ m3 → m1 ##ₘ m2 ∪ m3
map_Forall_union_1_1:
  ∀ `{FinMap K M} (A : Type) (m1 m2 : M A) (P : K → A → Prop), map_Forall P (m1 ∪ m2) → map_Forall P m1
map_union_subseteq_l':
  ∀ `{FinMap K M} (A : Type) (m1 m2 m3 : M A), m1 ⊆ m2 → m1 ⊆ m2 ∪ m3
map_positive_l_alt:
  ∀ `{FinMap K M} (A : Type) (m1 m2 : M A), m1 ≠ ∅ → m1 ∪ m2 ≠ ∅
map_disjoint_union_list_l_2:
  ∀ `{FinMap K M} (A : Type) (ms : list (M A)) (m : M A), Forall (λ m2 : M A, m2 ##ₘ m) ms → ⋃ ms ##ₘ m
map_disjoint_union_list_r_2:
  ∀ `{FinMap K M} (A : Type) (ms : list (M A)) (m : M A), Forall (λ m2 : M A, m2 ##ₘ m) ms → m ##ₘ ⋃ ms
map_disjoint_union_r:
  ∀ `{FinMap K M} (A : Type) (m1 m2 m3 : M A), m1 ##ₘ m2 ∪ m3 ↔ m1 ##ₘ m2 ∧ m1 ##ₘ m3
map_disjoint_union_l:
  ∀ `{FinMap K M} (A : Type) (m1 m2 m3 : M A), m1 ∪ m2 ##ₘ m3 ↔ m1 ##ₘ m3 ∧ m2 ##ₘ m3
map_disjoint_union_list_l:
  ∀ `{FinMap K M} (A : Type) (ms : list (M A)) (m : M A), ⋃ ms ##ₘ m ↔ Forall (λ m2 : M A, m2 ##ₘ m) ms
map_disjoint_union_list_r:
  ∀ `{FinMap K M} (A : Type) (ms : list (M A)) (m : M A), m ##ₘ ⋃ ms ↔ Forall (λ m2 : M A, m2 ##ₘ m) ms
map_difference_union:
  ∀ `{FinMap K M} (A : Type) (m1 m2 : M A), m1 ⊆ m2 → m1 ∪ m2 ∖ m1 = m2
map_Forall_union_1_2:
  ∀ `{FinMap K M} (A : Type) (m1 m2 : M A) (P : K → A → Prop), m1 ##ₘ m2 → map_Forall P (m1 ∪ m2) → map_Forall P m2
map_Forall_union_2:
  ∀ `{FinMap K M} (A : Type) (m1 m2 : M A) (P : K → A → Prop), map_Forall P m1 → map_Forall P m2 → map_Forall P (m1 ∪ m2)
map_union_mono_l:
  ∀ `{FinMap K M} (A : Type) (m1 m2 m3 : M A), m1 ⊆ m2 → m3 ∪ m1 ⊆ m3 ∪ m2
map_fmap_union:
  ∀ `{FinMap K M} (A B : Type) (f : A → B) (m1 m2 : M A), f <$> m1 ∪ m2 = (f <$> m1) ∪ (f <$> m2)
map_union_subseteq_r':
  ∀ `{FinMap K M} (A : Type) (m1 m2 m3 : M A), m2 ##ₘ m3 → m1 ⊆ m3 → m1 ⊆ m2 ∪ m3
map_union_least:
  ∀ `{FinMap K M} (A : Type) (m1 m2 m3 : M A), m1 ⊆ m3 → m2 ⊆ m3 → m1 ∪ m2 ⊆ m3
map_union_cancel_l:
  ∀ `{FinMap K M} (A : Type) (m1 m2 m3 : M A), m1 ##ₘ m3 → m2 ##ₘ m3 → m3 ∪ m1 = m3 ∪ m2 → m1 = m2
map_union_cancel_r:
  ∀ `{FinMap K M} (A : Type) (m1 m2 m3 : M A), m1 ##ₘ m3 → m2 ##ₘ m3 → m1 ∪ m3 = m2 ∪ m3 → m1 = m2
lookup_union_l:
  ∀ `{FinMap K M} (A : Type) (m1 m2 : M A) (i : K), is_Some (m1 !! i) → (m1 ∪ m2) !! i = m1 !! i
lookup_union_Some_l:
  ∀ `{FinMap K M} (A : Type) (m1 m2 : M A) (i : K) (x : A), m1 !! i = Some x → (m1 ∪ m2) !! i = Some x
insert_union_singleton_l:
  ∀ `{FinMap K M} (A : Type) (m : M A) (i : K) (x : A), <[i:=x]> m = {[i := x]} ∪ m
map_union_mono_r:
  ∀ `{FinMap K M} (A : Type) (m1 m2 m3 : M A), m2 ##ₘ m3 → m1 ⊆ m2 → m1 ∪ m3 ⊆ m2 ∪ m3
lookup_union_r:
  ∀ `{FinMap K M} (A : Type) (m1 m2 : M A) (i : K), m1 !! i = None → (m1 ∪ m2) !! i = m2 !! i
lookup_union_is_Some:
  ∀ `{FinMap K M} (A : Type) (m1 m2 : M A) (i : K), is_Some ((m1 ∪ m2) !! i) ↔ is_Some (m1 !! i) ∨ is_Some (m2 !! i)
map_omap_union:
  ∀ `{FinMap K M} (A B : Type) (f : A → option B) (m1 m2 : M A), m1 ##ₘ m2 → omap f (m1 ∪ m2) = omap f m1 ∪ omap f m2
insert_union_l:
  ∀ `{FinMap K M} (A : Type) (m1 m2 : M A) (i : K) (x : A), <[i:=x]> (m1 ∪ m2) = <[i:=x]> m1 ∪ m2
union_singleton_r:
  ∀ `{FinMap K M} (A : Type) (m : M A) (i : K) (x y : A), m !! i = Some x → m ∪ {[i := y]} = m
map_Forall_union:
  ∀ `{FinMap K M} (A : Type) (m1 m2 : M A) (P : K → A → Prop), m1 ##ₘ m2 → map_Forall P (m1 ∪ m2) ↔ map_Forall P m1 ∧ map_Forall P m2
lookup_union_Some_r:
  ∀ `{FinMap K M} (A : Type) (m1 m2 : M A) (i : K) (x : A), m1 ##ₘ m2 → m2 !! i = Some x → (m1 ∪ m2) !! i = Some x
map_union_reflecting_r:
  ∀ `{FinMap K M} (A : Type) (m1 m2 m3 : M A), m1 ##ₘ m3 → m2 ##ₘ m3 → m1 ∪ m3 ⊆ m2 ∪ m3 → m1 ⊆ m2
map_union_reflecting_l:
  ∀ `{FinMap K M} (A : Type) (m1 m2 m3 : M A), m3 ##ₘ m1 → m3 ##ₘ m2 → m3 ∪ m1 ⊆ m3 ∪ m2 → m1 ⊆ m2
lookup_union:
  ∀ `{FinMap K M} (A : Type) (m1 m2 : M A) (i : K), (m1 ∪ m2) !! i = union_with (λ x _ : A, Some x) (m1 !! i) (m2 !! i)
delete_union:
  ∀ `{FinMap K M} (A : Type) (m1 m2 : M A) (i : K), delete i (m1 ∪ m2) = delete i m1 ∪ delete i m2
map_size_disj_union:
  ∀ `{FinMap K M} (A : Type) (m1 m2 : M A), m1 ##ₘ m2 → base.size (m1 ∪ m2) = base.size m1 + base.size m2
union_proper:
  ∀ `{FinMap K M} (A : Type) `{Equiv A}, Proper (equiv ==> equiv ==> equiv) union
lookup_union_Some_inv_r:
  ∀ `{FinMap K M} (A : Type) (m1 m2 : M A) (i : K) (x : A), (m1 ∪ m2) !! i = Some x → m1 !! i = None → m2 !! i = Some x
lookup_union_Some_inv_l:
  ∀ `{FinMap K M} (A : Type) (m1 m2 : M A) (i : K) (x : A), (m1 ∪ m2) !! i = Some x → m2 !! i = None → m1 !! i = Some x
lookup_union_None:
  ∀ `{FinMap K M} (A : Type) (m1 m2 : M A) (i : K), (m1 ∪ m2) !! i = None ↔ m1 !! i = None ∧ m2 !! i = None
insert_union_singleton_r:
  ∀ `{FinMap K M} (A : Type) (m : M A) (i : K) (x : A), m !! i = None → <[i:=x]> m = m ∪ {[i := x]}
list_to_map_app:
  ∀ `{FinMap K M} (A : Type) (l1 l2 : list (K * A)), list_to_map (l1 ++ l2) = list_to_map l1 ∪ list_to_map l2
insert_union_r:
  ∀ `{FinMap K M} (A : Type) (m1 m2 : M A) (i : K) (x : A), m1 !! i = None → <[i:=x]> (m1 ∪ m2) = m1 ∪ <[i:=x]> m2
foldr_insert_union:
  ∀ `{FinMap K M} (A : Type) (m : M A) (l : list (K * A)), foldr (λ p : K * A, <[p.1:=p.2]>) m l = list_to_map l ∪ m
map_seq_app:
  ∀ `{FinMap nat M} (A : Type) (start : nat) (xs1 xs2 : list A), map_seq start (xs1 ++ xs2) = map_seq start xs1 ∪ map_seq (start + length xs1) xs2
lookup_union_Some:
  ∀ `{FinMap K M} (A : Type) (m1 m2 : M A) (i : K) (x : A), m1 ##ₘ m2 → (m1 ∪ m2) !! i = Some x ↔ m1 !! i = Some x ∨ m2 !! i = Some x
union_delete_insert:
  ∀ `{FinMap K M} (A : Type) (m1 m2 : M A) (i : K) (x : A), m1 !! i = Some x → delete i m1 ∪ <[i:=x]> m2 = m1 ∪ m2
foldr_delete_union:
  ∀ `{FinMap K M} (A : Type) (m1 m2 : M A) (is : list K), foldr delete (m1 ∪ m2) is = foldr delete m1 is ∪ foldr delete m2 is
lookup_union_Some_raw:
  ∀ `{FinMap K M} (A : Type) (m1 m2 : M A) (i : K) (x : A), (m1 ∪ m2) !! i = Some x ↔ m1 !! i = Some x ∨ m1 !! i = None ∧ m2 !! i = Some x
dom_union:
  ∀ `{FinMapDom K M D} (A : Type) (m1 m2 : M A), dom D (m1 ∪ m2) ≡ dom D m1 ∪ dom D m2
dom_union_L:
  ∀ `{FinMapDom K M D}, LeibnizEquiv D → ∀ (A : Type) (m1 m2 : M A), dom D (m1 ∪ m2) = dom D m1 ∪ dom D m2
map_union_equiv_eq:
  ∀ `{FinMap K M} (A : Type) `{Equiv A}, Equivalence equiv → ∀ m1 m2a m2b : M A, m1 ≡ m2a ∪ m2b ↔ (∃ m2a' m2b' : M A, m1 = m2a' ∪ m2b' ∧ m2a' ≡ m2a ∧ m2b' ≡ m2b)
union_insert_delete:
  ∀ `{FinMap K M} (A : Type) (m1 m2 : M A) (i : K) (x : A), m1 !! i = None → m2 !! i = Some x → <[i:=x]> m1 ∪ delete i m2 = m1 ∪ m2
map_filter_union_complement:
  ∀ `{FinMap K M} (A : Type) (P : K * A → Prop) `{∀ x : K * A, Decision (P x)} (m : M A), filter P m ∪ filter (λ v : K * A, ¬ P v) m = m
set_unfold_dom_union:
  ∀ `{FinMapDom K M D} (A : Type) (i : K) (m1 m2 : M A) (Q1 Q2 : Prop), SetUnfoldElemOf i (dom D m1) Q1 → SetUnfoldElemOf i (dom D m2) Q2 → SetUnfoldElemOf i (dom D (m1 ∪ m2)) (Q1 ∨ Q2)
map_filter_union:
  ∀ (K : Type) (M : Type → Type) (H : FMap M) (H0 : ∀ A : Type, Lookup K A (M 
A)) (H1 : ∀ A : Type, Empty (M A)) (H2 : ∀ A : Type, PartialAlter K A (M A))\n    (H3 : OMap M) (H4 : Merge M) (H5 : ∀ A : Type, FinMapToList K A (M A)) (EqDecision0 : EqDecision K),\n    FinMap K M → ∀ (A : Type) (P : K * A → Prop) (H7 : ∀ x : K * A, Decision (P x)) (m1 m2 : M A), m1 ##ₘ m2 → filter P (m1 ∪ m2) = filter P m1 ∪ filter P m2\nkmap_union:\n  ∀ (K1 : Type) (M1 : Type → Type) (H : FMap M1) (H0 : ∀ A : Type, Lookup K1 A (M1 A)) (H1 : ∀ A : Type, Empty (M1 A)) (H2 : ∀ A : Type, PartialAlter K1 A (M1 A))\n    (H3 : OMap M1) (H4 : Merge M1) (H5 : ∀ A : Type, FinMapToList K1 A (M1 A)) (EqDecision0 : EqDecision K1),\n    FinMap K1 M1\n    → ∀ (K2 : Type) (M2 : Type → Type) (H7 : FMap M2) (H8 : ∀ A : Type, Lookup K2 A (M2 A)) (H9 : ∀ A : Type, Empty (M2 A)) (H10 : ∀ A : Type, PartialAlter K2 A (M2 A))\n        (H11 : OMap M2) (H12 : Merge M2) (H13 : ∀ A : Type, FinMapToList K2 A (M2 A)) (EqDecision1 : EqDecision K2),\n        FinMap K2 M2 → ∀ f : K1 → K2, Inj eq eq f → ∀ (A : Type) (m1 m2 : M1 A), kmap f (m1 ∪ m2) = kmap f m1 ∪ kmap f m2\ndom_union_inv_L:\n  ∀ (K : Type) (M : Type → Type) (D : Type) (H : ∀ A : Type, Dom (M A) D) (H0 : FMap M) (H1 : ∀ A : Type, Lookup K A (M A)) (H2 : ∀ A : Type, Empty (M A))\n    (H3 : ∀ A : Type, PartialAlter K A (M A)) (H4 : OMap M) (H5 : Merge M) (H6 : ∀ A : Type, FinMapToList K A (M A)) (EqDecision0 : EqDecision K) (H7 : ElemOf K D)\n    (H8 : Empty D) (H9 : Singleton K D) (H10 : Union D) (H11 : Intersection D) (H12 : Difference D),\n    FinMapDom K M D\n    → LeibnizEquiv D → RelDecision elem_of → ∀ (A : Type) (m : M A) (X1 X2 : D), X1 ## X2 → dom D m = X1 ∪ X2 → ∃ m1 m2 : M A, m = m1 ∪ m2 ∧ m1 ##ₘ m2 ∧ dom D m1 = X1 ∧ dom D m2 = X2\nmap_cross_split:\n  ∀ (K : Type) (M : Type → Type) (H : FMap M) (H0 : ∀ A : Type, Lookup K A (M A)) (H1 : ∀ A : Type, Empty (M A)) (H2 : ∀ A : Type, PartialAlter K A (M A))\n    (H3 : OMap M) (H4 : Merge M) (H5 : ∀ A : Type, FinMapToList K A (M A)) (EqDecision0 : EqDecision K),\n    FinMap K 
M\n    → ∀ (A : Type) (ma mb mc md : M A),\n        ma ##ₘ mb\n        → mc ##ₘ md\n          → ma ∪ mb = mc ∪ md → ∃ mac mad mbc mbd : M A, mac ##ₘ mad ∧ mbc ##ₘ mbd ∧ mac ##ₘ mbc ∧ mad ##ₘ mbd ∧ mac ∪ mad = ma ∧ mbc ∪ mbd = mb ∧ mac ∪ mbc = mc ∧ mad ∪ mbd = md\ndom_union_inv:\n  ∀ (K : Type) (M : Type → Type) (D : Type) (H : ∀ A : Type, Dom (M A) D) (H0 : FMap M) (H1 : ∀ A : Type, Lookup K A (M A)) (H2 : ∀ A : Type, Empty (M A))\n    (H3 : ∀ A : Type, PartialAlter K A (M A)) (H4 : OMap M) (H5 : Merge M) (H6 : ∀ A : Type, FinMapToList K A (M A)) (EqDecision0 : EqDecision K) (H7 : ElemOf K D)\n    (H8 : Empty D) (H9 : Singleton K D) (H10 : Union D) (H11 : Intersection D) (H12 : Difference D),\n    FinMapDom K M D → RelDecision elem_of → ∀ (A : Type) (m : M A) (X1 X2 : D), X1 ## X2 → dom D m ≡ X1 ∪ X2 → ∃ m1 m2 : M A, m = m1 ∪ m2 ∧ m1 ##ₘ m2 ∧ dom D m1 ≡ X1 ∧ dom D m2 ≡ X2\nmap_intersection_filter:\n  ∀ (K : Type) (M : Type → Type) (H : FMap M) (H0 : ∀ A : Type, Lookup K A (M A)) (H1 : ∀ A : Type, Empty (M A)) (H2 : ∀ A : Type, PartialAlter K A (M A))\n    (H3 : OMap M) (H4 : Merge M) (H5 : ∀ A : Type, FinMapToList K A (M A)) (EqDecision0 : EqDecision K),\n    FinMap K M → ∀ (A : Type) (m1 m2 : M A), m1 ∩ m2 = filter (λ kx : K * A, is_Some (m1 !! kx.1) ∧ is_Some (m2 !! kx.1)) (m1 ∪ m2)\n```\n"
  },
  {
    "path": "old/machine.v",
    "content": "Add LoadPath \"/home/blaine/lab/cpdtlib\" as Cpdt.\nSet Implicit Arguments. Set Asymmetric Patterns.\n(*Require Import List String Cpdt.CpdtTactics Coq.Program.Wf.*)\nFrom Coq Require Import Morphisms RelationClasses Setoid.\nRequire Import Cpdt.CpdtTactics.\nFrom stdpp Require Import base options fin gmap.\n(*Import ListNotations.*)\nRequire Import theorems.utils.\n\n(*Inductive Bit: Type := B0 | B1.*)\n(*Notation BitWord word_size := (vec Bit word_size).*)\n\n(*https://coq.inria.fr/library/Coq.Bool.Bvector.html*)\n(*https://github.com/coq-community/bits*)\n(*https://github.com/mit-plv/bbv*)\n(*https://github.com/jasmin-lang/coqword*)\n(*https://coq.inria.fr/library/Coq.PArith.BinPosDef.html*)\n\n\n(*Notation MemoryBank word_size := (gmap (BitWord word_size) (BitWord word_size)).*)\n(*Notation MemoryBank size := (gmap (fin size) nat).*)\n\n(*Definition bank: RegisterBank 3 := {[ 2%fin := 1 ]}.\nExample test_bank: (bank !!! 2%fin) = 1. Proof. reflexivity. Qed.\nDefinition empty_bank: RegisterBank 3 := empty.\nExample test_empty: (empty_bank !!! 2%fin) = 0. Proof. reflexivity. Qed.*)\n\n(*Notation RegisterBank word_size register_count := (gmap (fin register_count) (BitWord word_size)).*)\nNotation RegisterBank size := (gmap (fin size) nat).\n\n\n(*From stdpp Require Import natmap.\nDefinition test__natmap_lookup_m: natmap nat := {[ 3 := 2; 0 := 2 ]}.\nExample test__natmap_lookup: test__natmap_lookup_m !! 3 = Some 2.\nProof. reflexivity. Qed.\n\nExample test__vec_total_lookup: ([# 4; 2] !!! 0%fin) = 4.\nProof. reflexivity. Qed.*)\n\nModule AbstractMachine.\n\tParameter size: nat.\n\n\tRecord MachineState := machine_state {\n\t\tprogram_counter: RegisterBank 1;\n\t\tregisters: RegisterBank size\n\t}.\n\t(*Notation pc state := (state.(program_counter) !!! 
0).*)\n\tGlobal Instance state_empty: Empty MachineState := machine_state empty empty.\n\tGlobal Instance state_union: Union MachineState := fun s1 s2 => (machine_state\n\t\t(union s1.(program_counter) s2.(program_counter))\n\t\t(union s1.(registers) s2.(registers))\n\t).\n\tTheorem state_equality c1 c2 r1 r2:\n\t\tc1 = c2 /\\ r1 = r2 <-> (machine_state c1 r1) = (machine_state c2 r2).\n\tProof. naive_solver. Qed.\n\n\tDefinition state_disjoint s1 s2 :=\n\t\tmap_disjoint s1.(program_counter) s2.(program_counter)\n\t\t/\\ map_disjoint s1.(registers) s2.(registers).\n\n\tGlobal Instance state_disjoint_symmetric: Symmetric state_disjoint.\n\tProof.\n\t\tintros ??; unfold state_disjoint;\n\t\trewrite !map_disjoint_spec; naive_solver.\n\tQed.\n\n\tTheorem state_union_commutative s1 s2:\n\t\tstate_disjoint s1 s2 -> union s1 s2 = union s2 s1.\n\tProof.\n\t\tintros [C R]; unfold union, state_union;\n\t\trewrite (map_union_comm _ _ C); rewrite (map_union_comm _ _ R);\n\t\treflexivity.\n\tQed.\n\n\tTheorem state_disjoint_union_distributive s1 s2 s3:\n\t\tstate_disjoint s1 (union s2 s3)\n\t\t<-> state_disjoint s1 s2 /\\ state_disjoint s1 s3.\n\tProof.\n\t\tsplit; unfold state_disjoint.\n\t\t- intros [?%map_disjoint_union_r ?%map_disjoint_union_r]; naive_solver.\n\t\t- intros [[] []]; split; apply map_disjoint_union_r_2; assumption.\n\tQed.\n\n\tTheorem state_union_associative (s1 s2 s3: MachineState):\n\t\tunion s1 (union s2 s3) = union (union s1 s2) s3.\n\tProof.\n\t\tunfold union, state_union; simpl;\n\t\tapply state_equality; apply map_union_assoc.\n\tQed.\n\n\tTheorem state_union_empty_l: forall state: MachineState, union empty state = state.\n\tProof.\n\t\tunfold union, state_union; intros []; simpl; do 2 rewrite map_empty_union; reflexivity.\n\tQed.\n\tTheorem state_union_empty_r: forall state: MachineState, union state empty = state.\n\tProof.\n\t\tunfold union, state_union; intros []; simpl; do 2 rewrite map_union_empty; reflexivity.\n\tQed.\n\n\tTheorem 
state_separate_counter_registers_disjoint: forall registers program_counter,\n\t\tstate_disjoint (machine_state empty registers) (machine_state program_counter empty).\n\tProof. intros; hnf; simpl; auto with map_disjoint. Qed.\n\n\tTheorem state_empty_disjoint: forall state, state_disjoint empty state.\n\tProof. intros; hnf; simpl; auto with map_disjoint. Qed.\n\n\tGlobal Hint Extern 0 (state_disjoint _ _) => (split; assumption): core.\n\n\tNotation Assertion := (MachineState -> Prop) (only parsing).\n\tDeclare Scope assertion_scope.\n\tOpen Scope assertion_scope.\n\n\n\tDefinition state_implies (H1 H2: Assertion): Prop :=\n\t\tforall state, H1 state -> H2 state.\n\tNotation \"H1 **> H2\" := (state_implies H1 H2) (at level 55).\n\tDefinition state_equivalent (H1 H2: Assertion): Prop :=\n\t\tforall state, H1 state <-> H2 state.\n\tNotation \"H1 <*> H2\" := (state_equivalent H1 H2) (at level 60).\n\n\tTheorem state_implies_reflexive: forall H, H **> H.\n\tProof. intros; hnf; trivial. Qed.\n\tHint Resolve state_implies_reflexive: core.\n\n\tTheorem state_implies_transitive: forall H2 H1 H3,\n\t\t(H1 **> H2) -> (H2 **> H3) -> (H1 **> H3).\n\tProof. intros ??? M1 M2 state H1state; auto. Qed.\n\tHint Resolve state_implies_transitive: core.\n\n\tTheorem state_implies_antisymmetric: forall H1 H2,\n\t\t(H1 **> H2) -> (H2 **> H1) -> H1 <*> H2.\n\tProof. intros ?? M1 M2; split; auto. Qed.\n\tHint Resolve state_implies_antisymmetric: core.\n\n\tTheorem state_implies_transitive_r: forall H2 H1 H3,\n\t\t(H2 **> H3) -> (H1 **> H2) -> (H1 **> H3).\n\tProof. intros ??? M1 M2; eapply state_implies_transitive; eauto. Qed.\n\tHint Resolve state_implies_transitive_r: core.\n\n\tTheorem state_operation_commutative:\n\t\tforall (operation: Assertion -> Assertion -> Assertion),\n\t\t\t(forall H1 H2, operation H1 H2 **> operation H2 H1)\n\t\t\t-> (forall H1 H2, operation H1 H2 <*> operation H2 H1).\n\tProof. intros; apply state_implies_antisymmetric; auto. 
Qed.\n\n\tGlobal Instance state_equivalent_reflexive: Reflexive state_equivalent.\n\tProof. constructor; apply state_implies_reflexive. Qed.\n\tGlobal Instance state_equivalent_symmetric: Symmetric state_equivalent.\n\tProof. hnf; intros ??[]; split; assumption. Qed.\n\tGlobal Instance state_equivalent_transitive: Transitive state_equivalent.\n\tProof. hnf; intros ???[][]; split; eapply state_implies_transitive; eauto. Qed.\n\tGlobal Instance state_equivalence: Equivalence state_equivalent.\n\tProof.\n\t\tconstructor; auto using state_equivalent_reflexive, state_equivalent_symmetric,state_equivalent_transitive.\n\tQed.\n\tGlobal Instance subrelation_state_equivalent_implies:\n\t\tsubrelation state_equivalent state_implies.\n\tProof. intros ??[]; assumption. Qed.\n\n\tGlobal Instance Proper_state_implies: Proper (state_equivalent ==> state_equivalent ==> flip impl) state_implies.\n\tProof. intros ??[]??[]; hnf; eauto. Qed.\n\n\tDefinition state_unknown: Assertion := fun state => state = empty.\n\tNotation \"\\[]\" := (state_unknown) (at level 0): assertion_scope.\n\n\tDefinition state_register (register: fin size) (value: nat): Assertion := fun state =>\n\t\tstate = machine_state empty (singletonM register value).\n\tNotation \"'$' register '==' value\" :=\n\t\t(state_register register%fin value) (at level 32): assertion_scope.\n\n\tDefinition state_counter (value: nat): Assertion := fun state =>\n\t\tstate = machine_state (singletonM 0%fin value) empty.\n\tNotation \"'pc_at' value\" := (state_counter value) (at level 32): assertion_scope.\n\n\tDefinition state_star (H1 H2: Assertion): Assertion :=\n\t\tfun state => exists s1 s2,\n\t\t\tH1 s1 /\\ H2 s2\n\t\t\t/\\ state_disjoint s1 s2\n\t\t\t/\\ state = union s1 s2.\n\tNotation \"H1 '\\*' H2\" :=\n\t\t(state_star H1 H2) (at level 41, right associativity): assertion_scope.\n\n\tDefinition state_exists (A: Type) (P: A -> Assertion): Assertion :=\n\t\tfun state => exists a, P a state.\n\tNotation \"'\\exists' a1 .. 
an , H\" :=\n\t\t(state_exists (fun a1 => .. (state_exists (fun an => H)) ..))\n\t\t(\n\t\t\tat level 39, a1 binder, H at level 50, right associativity,\n\t\t\tformat \"'[' '\\exists' '/ ' a1 .. an , '/ ' H ']'\"\n\t\t): assertion_scope.\n\n\tDefinition state_forall (A: Type) (P: A -> Assertion): Assertion :=\n\t\tfun state => forall a, P a state.\n\tNotation \"'\\forall' a1 .. an , H\" :=\n\t\t(state_forall (fun a1 => .. (state_forall (fun an => H)) ..))\n\t\t(\n\t\t\tat level 39, a1 binder, H at level 50, right associativity,\n\t\t\tformat \"'[' '\\forall' '/ '  a1  ..  an , '/ '  H ']'\"\n\t\t): assertion_scope.\n\n\n\tDefinition state_pure (P: Prop): Assertion :=\n\t\t\\exists (p:P), \\[].\n\tNotation \"\\[ P ]\" := (state_pure P)\n\t\t(at level 0, format \"\\[ P ]\"): assertion_scope.\n\n\tDefinition state_wand (H1 H2: Assertion): Assertion :=\n\t\t\\exists H0, H0 \\* state_pure ((H1 \\* H0) **> H2).\n\tNotation \"H1 \\-* H2\" := (state_wand H1 H2)\n\t\t(at level 43, right associativity): assertion_scope.\n\n\n\t(* ### state_unknown *)\n\tTheorem state_unknown_intro: \\[] empty.\n\tProof. hnf; trivial. Qed.\n\n\tTheorem state_unknown_inversion: forall state, \\[] state -> state = empty.\n\tProof. intros; hnf; auto. Qed.\n\n\t(* ### state_star *)\n\tTheorem state_star_intro: forall (H1 H2: Assertion) s1 s2,\n\t\tH1 s1 -> H2 s2 -> state_disjoint s1 s2\n\t\t-> (H1 \\* H2) (union s1 s2).\n\tProof. intros; exists s1, s2; auto. Qed.\n\n\tTheorem state_star_inversion: forall H1 H2 state,\n\t\t(H1 \\* H2) state -> exists s1 s2,\n\t\t\tH1 s1 /\\ H2 s2 /\\ state_disjoint s1 s2 /\\ state = union s1 s2.\n\tProof. intros ??? A; hnf in A; eauto. Qed.\n\n\tTheorem state_star_commutative: forall H1 H2,\n\t\t H1 \\* H2 <*> H2 \\* H1.\n\tProof.\n\t\tapply state_operation_commutative; unfold state_star;\n\t\tintros ?? state (s1 & s2 & ? & ? 
& [] & U);\n\t\trewrite state_union_commutative in U; trivial;\n\t\texists s2, s1; repeat split; auto.\n\tQed.\n\n\tTheorem state_star_associative H1 H2 H3:\n\t\t(H1 \\* H2) \\* H3 <*> H1 \\* (H2 \\* H3).\n\tProof.\n\t\tapply state_implies_antisymmetric.\n\t\t-\n\t\t\tintros state (state' & h3 & (h1 & h2 & ?&?&?&?)&?& D%symmetry &?); subst state';\n\t\t\texists h1, (union h2 h3);\n\t\t\trewrite state_union_associative; assert (D' := D);\n\t\t\tapply state_disjoint_union_distributive in D' as [?%symmetry ?%symmetry];\n\t\t\trepeat split; repeat apply state_star_intro; trivial;\n\t\t\tapply state_disjoint_union_distributive; split; trivial.\n\t\t-\n\t\t\tintros state (h1 & state' &?&(h2 & h3 &?&?&?&?)&D&?); subst state';\n\t\t\texists (union h1 h2), h3;\n\t\t\trewrite <-state_union_associative; assert (D' := D);\n\t\t\tapply state_disjoint_union_distributive in D' as [];\n\t\t\trepeat split; repeat apply state_star_intro; trivial; symmetry;\n\t\t\tapply state_disjoint_union_distributive; split; symmetry; trivial.\n\tQed.\n\n\tTheorem state_star_empty_l H:\n\t\t\\[] \\* H <*> H.\n\tProof.\n\t\tapply state_implies_antisymmetric; hnf.\n\t\t-\n\t\t\tintros ? [? (? & ?%state_unknown_inversion & ? & ? & ?)]; subst;\n\t\t\trewrite state_union_empty_l; assumption.\n\t\t-\n\t\t\tintros ?; exists empty, state; repeat split; simpl;\n\t\t\ttry apply state_unknown_intro; try apply map_disjoint_empty_l;\n\t\t\ttry rewrite state_union_empty_l; trivial.\n\tQed.\n\n\tTheorem state_star_empty_r H:\n\t\t H \\* \\[] <*> H.\n\tProof. rewrite state_star_commutative; apply state_star_empty_l. 
Qed.\n\n\tTheorem state_star_exists A (P: A -> Assertion) H:\n\t\t(\\exists a, P a) \\* H <*> \\exists a, (P a \\* H).\n\tProof.\n\t\tapply state_implies_antisymmetric; intros state.\n\t\t- intros (s1 & s2 & (a &?)&?&?&?); exists a, s1, s2; auto.\n\t\t- intros (a & (s1 & s2 &?&?&?&?)); exists s1, s2; split; auto; exists a; trivial.\n\tQed.\n\n\tTheorem state_implies_frame_l H2 H1 H1':\n\t\tH1 **> H1' -> (H1 \\* H2) **> (H1' \\* H2).\n\tProof. intros ?? (s1 & s2 & (?&?&?&?)); exists s1, s2; auto. Qed.\n\n\tTheorem state_implies_frame_r H1 H2 H2':\n\t\tH2 **> H2' -> (H1 \\* H2) **> (H1 \\* H2').\n\tProof. intros ?? (s1 & s2 & (?&?&?&?)); exists s1, s2; auto. Qed.\n\n\t(*Theorem state_implies_frame_r' H1 H2 H2':\n\t\tH2 **> H2' -> (H1 \\* H2) **> (H1 \\* H2').\n\tProof.\n\t\tintros ?; do 2 rewrite (state_star_commutative H1);\n\t\tapply state_implies_frame_l; assumption.\n\tQed.*)\n\n\tTheorem state_implies_frame H1 H1' H2 H2':\n\t\tH1 **> H1'\n\t\t-> H2 **> H2'\n\t\t-> (H1 \\* H2) **> (H1' \\* H2').\n\tProof. intros ??? (s1 & s2 & (?&?&?&?)); exists s1, s2; auto. Qed.\n\n\tTheorem state_implies_star_trans_l H1 H2 H3 H4:\n\t\tH1 **> H2\n\t\t-> H2 \\* H3 **> H4\n\t\t-> H1 \\* H3 **> H4.\n\tProof. intros M1 M2 ? (s1 & s2 & (?&?&?&?)); apply M2; exists s1, s2; auto. Qed.\n\n\tTheorem state_implies_star_trans_r H1 H2 H3 H4:\n\t\tH1 **> H2 ->\n\t\tH3 \\* H2 **> H4 ->\n\t\tH3 \\* H1 **> H4.\n\tProof. intros M1 M2 ? (s1 & s2 & (?&?&?&?)); apply M2; exists s1, s2; auto. Qed.\n\n\t(* ### state_pure *)\n\tTheorem state_pure_intro: forall P: Prop, P -> \\[P] empty.\n\tProof. intros ? P; exists P; hnf; auto. Qed.\n\n\tTheorem state_pure_inversion: forall P state, \\[P] state -> P /\\ state = empty.\n\tProof. intros ?? A; hnf in A; naive_solver. Qed.\n\n\tTheorem state_star_pure_l P H state:\n\t\t(\\[P] \\* H) state <-> P /\\ (H state).\n\tProof.\n\t\tsplit.\n\t\t-\n\t\t\tintros (s1 & s2 & (p & ?%state_unknown_inversion) & ? & ? & ?); subst;\n\t\t\trewrite state_union_empty_l; auto.\n\t\t-\n\t\t\tintros [p ?]; exists empty, state; repeat split; simpl;\n\t\t\ttry (apply state_pure_intro; assumption); try apply map_disjoint_empty_l;\n\t\t\ttry rewrite state_union_empty_l; trivial.\n\tQed.
\n\n\tTheorem state_star_pure_r: forall P H h,\n\t\t(H \\* \\[P]) h = (H h /\\ P).\n\tProof.\n\t\tintros. rewrite hstar_comm. rewrite state_star_pure_l. apply* prop_ext.\n\tQed.\n\n\tTheorem himpl_state_star_pure_r: forall P H H',\n\t\tP ->\n\t\t(H ==> H') ->\n\t\tH ==> (\\[P] \\* H').\n\tProof. introv HP W. intros h K. rewrite* state_star_pure_l. Qed.\n\n\tTheorem state_pure_inv_hempty: forall P h,\n\t\t\\[P] h ->\n\t\tP /\\ \\[] h.\n\tProof.\n\t\tintrov M. rewrite <- state_star_pure_l. rewrite~ hstar_hempty_r.\n\tQed.\n\n\tTheorem state_pure_intro_hempty: forall P h,\n\t\t\\[] h ->\n\t\tP ->\n\t\t\\[P] h.\n\tProof.\n\t\tintrov M N. rewrite <- (hstar_hempty_l \\[P]). rewrite~ state_star_pure_r.\n\tQed.\n\n\tTheorem himpl_hempty_state_pure: forall P,\n\t\tP ->\n\t\t\\[] ==> \\[P].\n\tProof. introv HP. intros h Hh. applys* state_pure_intro_hempty. Qed.\n\n\tTheorem himpl_state_star_pure_l: forall P H H',\n\t\t(P -> H ==> H') ->\n\t\t(\\[P] \\* H) ==> H'.\n\tProof.\n\t\tintrov W Hh. rewrite state_star_pure_l in Hh. applys* W.\n\tQed.\n\n\tTheorem hempty_eq_state_pure_true :\n\t\t\\[] = \\[True].\n\tProof.\n\t\tapplys himpl_antisym; intros h M.\n\t\t{ applys* state_pure_intro_hempty. }\n\t\t{ forwards*: state_pure_inv_hempty M. }\n\tQed.\n\n\tTheorem hfalse_hstar_any: forall H,\n\t\t\\[False] \\* H = \\[False].\n\tProof.\n\t\tintros. applys himpl_antisym; intros h; rewrite state_star_pure_l; intros M.\n\t\t{ false*. } { lets: state_pure_inv_hempty M. false*. }\n\tQed.\n\n\t(* ### state_register *)\n\tTheorem state_register_intro: forall register value,\n\t\t($register == value) (machine_state empty {[ register := value ]}).\n\tProof. intros; hnf; auto. Qed.\n\n\tTheorem state_register_inversion: forall register value state,\n\t\t($register == value) state\n\t\t-> state = (machine_state empty {[ register := value ]}).\n\tProof. intros ??? A; hnf in A; auto. 
Qed.\n\n\tTheorem state_star_register_same register v1 v2:\n\t\t($register == v1) \\* ($register == v2) **> \\[False].\n\tProof.\n\t\thnf; intros ? (s1 & s2 & ?%state_register_inversion & ?%state_register_inversion & [] & _);\n\t\telimtype False; subst; simpl in *;\n\t\teapply map_disjoint_spec; trivial; apply lookup_singleton.\n\tQed.\n\n\t(* ### state_counter *)\n\tTheorem state_counter_intro: forall counter,\n\t\t(pc_at counter) (machine_state {[ 0%fin := counter ]} empty).\n\tProof. intros; hnf; auto. Qed.\n\n\tTheorem state_counter_inversion: forall counter state,\n\t\t(pc_at counter) state\n\t\t-> state = (machine_state {[ 0%fin := counter ]} empty).\n\tProof. intros ?? A; hnf in A; auto. Qed.\n\n\tTheorem state_star_counter v1 v2:\n\t\t(pc_at v1) \\* (pc_at v2) **> \\[False].\n\tProof.\n\t\thnf; intros ? (s1 & s2 & ?%state_counter_inversion & ?%state_counter_inversion & [? _] & _);\n\t\telimtype False; subst; simpl in *;\n\t\teapply map_disjoint_spec; trivial; apply lookup_singleton.\n\tQed.\n\n\t(* ### state_exists *)\n\tTheorem state_exists_intro: forall A (a: A) (P: A -> Assertion) state,\n\t\tP a state -> (\\exists a, P a) state.\n\tProof. intros; hnf; eauto. Qed.\n\n\tTheorem state_exists_inversion: forall X (P: X -> Assertion) state,\n\t\t(\\exists x, P x) state -> exists x, P x state.\n\tProof. intros ??? A; hnf in A; eauto. Qed.\n\n\t(* ### state_forall *)\n\tTheorem state_forall_intro: forall A (P: A -> Assertion) state,\n\t\t(forall a, P a state) -> (state_forall P) state.\n\tProof. intros; hnf; assumption. Qed.\n\n\tTheorem state_forall_inversion: forall A (P: A -> Assertion) state,\n\t\t(state_forall P) state -> forall a, P a state.\n\tProof. intros; hnf; trivial. Qed.\n\n\t(*Theorem state_star_forall H A (P: A -> Assertion):\n\t\t(state_forall P) \\* H **> state_forall (P \\* H).\n\tProof.\n\t\tintros h M. destruct M as (h1&h2&M1&M2&D&U). intros x. 
exists~ h1 h2.\n\tQed.*)\n\n\tTheorem state_implies_forall_r: forall A (P: A -> Assertion) H,\n\t\t(forall a, H **> P a) -> H **> (state_forall P).\n\tProof. intros ??? M ???; apply M; assumption.  Qed.\n\n\tTheorem state_implies_forall_l: forall A a (P: A -> Assertion) H,\n\t\t(P a **> H) -> (state_forall P) **> H.\n\tProof. intros ???? M ??; apply M; trivial. Qed.\n\n\tTheorem state_forall_specialize: forall A a (P: A -> Assertion),\n\t\t(state_forall P) **> (P a).\n\tProof. intros; apply (state_implies_forall_l a); auto. Qed.\n\n\t(* ### state_wand *)\n\tTheorem state_wand_equiv: forall H0 H1 H2,\n\t\t(H0 **> H1 \\-* H2) <-> (H1 \\* H0 **> H2).\n\tProof.\n\t\tunfold state_wand. iff M.\n\t\t{ rewrite state_star_comm. applys state_implies_state_star_trans_l (rm M).\n\t\t\trewrite state_star_state_exists. applys state_implies_state_exists_l. intros H.\n\t\t\trewrite (state_star_comm H). rewrite state_star_assoc.\n\t\t\trewrite (state_star_comm H H1). applys~ state_implies_state_star_state_pure_l. }\n\t\t{ applys state_implies_state_exists_r H0.\n\t\t\trewrite <- (state_star_hempty_r H0) at 1.\n\t\t\tapplys state_implies_frame_r. applys state_implies_hempty_state_pure M. }\n\tQed.\n\n\tTheorem state_implies_state_wand_r: forall H1 H2 H3,\n\t\tH2 \\* H1 **> H3 ->\n\t\tH1 **> (H2 \\-* H3).\n\tProof. introv M. rewrite~ state_wand_equiv. Qed.\n\n\tTheorem state_implies_state_wand_r_inv: forall H1 H2 H3,\n\t\tH1 **> (H2 \\-* H3) ->\n\t\tH2 \\* H1 **> H3.\n\tProof. introv M. rewrite~ <- state_wand_equiv. Qed.\n\n\tTheorem state_wand_cancel: forall H1 H2,\n\t\tH1 \\* (H1 \\-* H2) **> H2.\n\tProof. intros. applys state_implies_state_wand_r_inv. applys state_implies_refl. Qed.\n\n\tArguments state_wand_cancel: clear implicits.\n\n\tTheorem state_implies_hempty_state_wand_same: forall H,\n\t\t\\[] **> (H \\-* H).\n\tProof. intros. apply state_implies_state_wand_r. rewrite~ state_star_hempty_r. 
Qed.\n\n\tTheorem state_wand_hempty_l: forall H,\n\t\t(\\[] \\-* H) = H.\n\tProof.\n\t\tintros. applys state_implies_antisym.\n\t\t{ rewrite <- state_star_hempty_l at 1. applys state_wand_cancel. }\n\t\t{ rewrite state_wand_equiv. rewrite~ state_star_hempty_l. }\n\tQed.\n\n\tTheorem state_wand_state_pure_l: forall P H,\n\t\tP ->\n\t\t(\\[P] \\-* H) = H.\n\tProof.\n\t\tintrov HP. applys state_implies_antisym.\n\t\t{ lets K: state_wand_cancel \\[P] H. applys state_implies_trans K.\n\t\t\tapplys* state_implies_state_star_state_pure_r. }\n\t\t{ rewrite state_wand_equiv. applys* state_implies_state_star_state_pure_l. }\n\tQed.\n\n\tTheorem state_wand_curry: forall H1 H2 H3,\n\t\t(H1 \\* H2) \\-* H3 **> H1 \\-* (H2 \\-* H3).\n\tProof.\n\t\tintros. apply state_implies_state_wand_r. apply state_implies_state_wand_r.\n\t\trewrite <- state_star_assoc. rewrite (state_star_comm H1 H2).\n\t\tapplys state_wand_cancel.\n\tQed.\n\n\tTheorem state_wand_uncurry: forall H1 H2 H3,\n\t\tH1 \\-* (H2 \\-* H3) **> (H1 \\* H2) \\-* H3.\n\tProof.\n\t\tintros. rewrite state_wand_equiv. rewrite (state_star_comm H1 H2).\n\t\trewrite state_star_assoc. applys state_implies_state_star_trans_r.\n\t\t{ applys state_wand_cancel. } { applys state_wand_cancel. }\n\tQed.\n\n\tTheorem state_wand_curry_eq: forall H1 H2 H3,\n\t\t(H1 \\* H2) \\-* H3 = H1 \\-* (H2 \\-* H3).\n\tProof.\n\t\tintros. applys state_implies_antisym.\n\t\t{ applys state_wand_curry. }\n\t\t{ applys state_wand_uncurry. }\n\tQed.\n\n\tTheorem state_wand_inv: forall h1 h2 H1 H2,\n\t\t(H1 \\-* H2) h2 ->\n\t\tH1 h1 ->\n\t\tFmap.disjoint h1 h2 ->\n\t\tH2 (h1 \\u h2).\n\tProof.\n\t\tintrov M2 M1 D. unfolds state_wand. lets (H0&M3): state_exists_inv M2.\n\t\tlets (h0&h3&P1&P3&D'&U): state_star_inv M3. lets (P4&E3): state_pure_inv P3.\n\t\tsubst h2 h3. rewrite union_empty_r in *. applys P4. applys* state_star_intro.\n\tQed.\n\nEnd AbstractMachine.\n"
  },
  {
    "path": "old/main.md",
    "content": "```v\nTheorem absurd_stuck instr:\n  ~(stopping instr)\n  -> forall program cur,\n  (cur_instr cur program) = Some instr\n  -> (forall next, ~(@step program cur next))\n  -> False.\nProof.\n  intros Hstopping ?? Hcur Hstuck;\n  specialize (not_stopping_not_stuck Hstopping program cur Hcur) as [next];\n  specialize Hstuck with next; contradiction.\nQed.s\n\n\n\nTheorem absurd_well_founded_minimal {T} (P: T -> T -> Prop) (least other: T):\n  well_founded P\n  -> P least other\n  -> ~(P other least).\nProof.\nintros.\n\n\nQed.\n\nSection well_founded_compatibility.\n  Variable A B: Type.\n  Variable RA: A -> A -> Prop.\n  Variable RB: B -> B -> Prop.\n\n  Variable RB_well_founded: well_founded RB.\n  Variable f: A -> B.\n  Hypothesis H_compat_A: forall a1 a2: A, (RA a1 a2) -> (RB (f a1) (f a2)).\n  Hypothesis H_compat_B: forall a1 a2: A, (RB (f a1) (f a2)) -> (RA a1 a2).\n\n  https://github.com/charguer/tlc/blob/master/src/LibWf.v\n\n  <!-- you could maybe use this induction principle -->\n  https://coq.inria.fr/library/Coq.Init.Wf.html\n  <!-- Theorem well_founded_ind :\n    forall P:A -> Prop,\n      (forall x:A, (forall y:A, R y x -> P y) -> P x) -> forall a:A, P a. -->\n\n  Theorem yo:\n    forall min other, RB (f min) (f other) -> Acc RA min.\n  Proof.\nintros ?? HRB.\napply H_compat_B in HRB.\nHint Constructors Acc: core.\n\n\n  Qed.\n\n\n  Theorem well_founded_compat: well_founded RA.\n  Proof.\nconstructor.\n\nunfold well_founded in *.\nconstructor.\nintros.\nrename y into a1; rename a into a2.\n\nspecialize (H_compat a1 a2).\nremember (f a1) as b1; remember (f a2) as b2.\ndestruct H_compat as [H_compat_A H_compat_B].\nspecialize (H_compat_A H) as ?.\n\n\nspecialize (RB_well_founded b1) as Hb1.\nspecialize (RB_well_founded b2) as Hb2.\ninversion Hb1.\n\n\nspecialize (Acc_inv RB_well_founded).\n\n  Qed.\n\nCheck RB_well_founded.\n\nEndSection well_founded_compatibility.\n```\n"
  },
  {
    "path": "old/main.v",
    "content": "Add LoadPath \"/home/blaine/lab/cpdtlib\" as Cpdt.\nSet Implicit Arguments. Set Asymmetric Patterns.\nRequire Import List String Cpdt.CpdtTactics Coq.Program.Wf.\nFrom stdpp Require Import base fin vector options.\nImport ListNotations.\nRequire Import theorems.utils.\n\n\nSection Sized.\n\tContext {size: nat}.\n\n\tNotation register := (fin size).\n\tRecord MachineState := machine_state {\n\t\tcounter: nat;\n\t\tregisters: (vec nat size);\n\t\tprogram_memory: list Instruction\n\t}.\n\n\tInductive Operand: Type :=\n\t\t| Literal (n: nat)\n\t\t| Register (r: register)\n\t.\n\n\tDefinition eval_operand\n\t\t(cur: MachineState) (operand: Operand)\n\t: nat :=\n\t\tmatch operand with\n\t\t| Literal n => n\n\t\t| Register r => (cur.(registers) !!! r)\n\t\tend\n\t.\n\n\tInductive Instruction :=\n\t\t| InstExit\n\t\t| InstMov (src: Operand) (dest: register)\n\t\t| InstAdd (val: Operand) (dest: register)\n\n\t\t(*| InstJump (to: nat)*)\n\t\t(*| InstBranchEq (a: Operand) (b: Operand) (to: nat)*)\n\t\t(*| InstBranchNeq (a: Operand) (b: Operand) (to: nat)*)\n\n\t\t(*| InstStore (src: Operand) (dest: Operand)*)\n\t\t(*| InstLoad (src: Operand) (dest: register)*)\n\t.\n\tHint Constructors Instruction: core.\n\n\tNotation Within cur :=\n\t\t(cur.(counter) < (length cur.(program_memory))) (only parsing).\n\n\tNotation cur_instr cur :=\n\t\t(lookup cur.(counter) cur.(program_memory)) (only parsing).\n\n\tNotation get_instr cur :=\n\t\t(@safe_lookup _ cur.(counter) cur.(program_memory) _) (only parsing).\n\n\tNotation get cur reg :=\n\t\t(cur.(registers) !!! 
reg) (only parsing).\n\n\tNotation update cur dest val :=\n\t\t(vinsert dest val cur.(registers)) (only parsing).\n\n\tNotation incr cur :=\n\t\t(S cur.(counter)) (only parsing).\n\n\tInductive Step: MachineState -> MachineState -> Prop :=\n\t\t| Step_Mov: forall cur src dest,\n\t\t\t(cur_instr cur) = Some (InstMov src dest)\n\t\t\t-> Step program cur (machine_state\n\t\t\t\t(incr cur)\n\t\t\t\t(update cur dest (eval_operand cur src))\n\t\t\t)\n\n\t\t| Step_Add: forall cur val dest,\n\t\t\t(cur_instr cur) = Some (InstAdd val dest)\n\t\t\t-> Step program cur (machine_state\n\t\t\t\t(incr cur)\n\t\t\t\t(update cur dest ((eval_operand cur val) + (get cur dest)))\n\t\t\t)\n\n\t\t(*| Step_Jump: forall cur to,\n\t\t\t(cur_instr cur program) = Some (InstJump to)\n\t\t\t-> Step program cur (machine_state to cur.(registers))*)\n\n\t\t(*| Step_BranchEq: forall cur a b to,\n\t\t\t(cur_instr cur program) = Some (InstBranchEq a b to)\n\t\t\t-> IF (a = b)\n\t\t\t\tthen Step program cur (machine_state to cur.(registers))\n\t\t\t\telse Step program cur (machine_state (incr cur) cur.(registers))*)\n\n\t\t(*| Step_BranchNeq: forall cur a b to,\n\t\t\t(cur_instr cur program) = Some (InstBranchNeq a b to)\n\t\t\t-> IF (a = b)\n\t\t\t\tthen Step program cur (machine_state (incr cur) cur.(registers))\n\t\t\t\telse Step program cur (machine_state to cur.(registers))*)\n\t.\n\tHint Constructors Step: core.\n\n\tTheorem Step_always_Within program cur next:\n\t\tStep program cur next -> Within program cur.\n\tProof. inversion 1; eauto using lookup_lt_Some. 
Qed.\n\n\n\tInductive stopping: Instruction -> Prop :=\n\t\t| stopping_Exit: stopping InstExit\n\t.\n\tHint Constructors stopping: core.\n\tDefinition is_stopping: forall instr, {stopping instr} + {~(stopping instr)}.\n\t\trefine (fun instr =>\n\t\t\tmatch instr with | InstExit => Yes | _ => No end\n\t\t); try constructor; inversion 1.\n\tDefined.\n\n\tTheorem stopping_stuck instr:\n\t\tstopping instr\n\t\t-> forall program cur next,\n\t\t(cur_instr cur program) = Some instr\n\t\t-> ~(Step program cur next).\n\tProof.\n\t\tintros Hstopping ???? HStep;\n\t\tinversion Hstopping; inversion HStep; naive_solver.\n\tQed.\n\n\tTheorem not_stopping_not_stuck instr:\n\t\t~(stopping instr)\n\t\t-> forall program cur,\n\t\t(cur_instr cur program) = Some instr\n\t\t-> exists next, Step program cur next.\n\tProof.\n\t\tdestruct instr; try contradiction; eauto.\n\tQed.\n\n\tInductive branching: Instruction -> Prop :=\n\t\t(*| branch_BranchEq: forall a b to, branching (InstBranchEq a b to)*)\n\t\t(*| branch_BranchNeq: forall a b to, branching (InstBranchNeq a b to)*)\n\t\t(*| branch_Jump: forall to, branching (InstJump to)*)\n\t.\n\tHint Constructors branching: core.\n\tDefinition is_branching: forall instr, {branching instr} + {~(branching instr)}.\n\t\trefine (fun instr =>\n\t\t\tmatch instr with\n\t\t\t\t(*| InstBranchEq _ _ _ => Yes*)\n\t\t\t\t(*| InstBranchNeq _ _ _ => Yes*)\n\t\t\t\t(*| InstJump _ => Yes*)\n\t\t\t\t| _ => No\n\t\t\tend\n\t\t); inversion 1.\n\tDefined.\n\n\tInductive sequential: Instruction -> Prop :=\n\t\t| sequential_Mov: forall src dest, sequential (InstMov src dest)\n\t\t| sequential_Add: forall val dest, sequential (InstAdd val dest)\n\t.\n\tHint Constructors sequential: core.\n\t(*Definition is_sequential*)\n\n\tTheorem sequential_always_next instr:\n\t\tsequential instr\n\t\t-> forall (program: list Instruction) cur next,\n\t\t(cur_instr cur program) = Some instr\n\t\t-> Step program cur next\n\t\t-> counter next = S (counter 
cur).\n\tProof.\n\t\tintros ????? HStep; destruct instr; inversion HStep; auto.\n\tQed.\n\n\tNotation segment_sequential segment := (Forall sequential segment).\n\n\n\tNotation NextStep program instr cur next :=\n\t\t((cur_instr cur (program%list)) = Some instr -> Step (program%list) cur next)\n\t\t(only parsing).\n\n\tDefinition execute_instruction:\n\t\tforall instr (cur: MachineState), ~stopping instr\n\t\t-> {next: MachineState | forall program, NextStep program instr cur next}\n\t.\n\t\trefine (fun instr cur =>\n\t\t\tmatch instr with\n\t\t\t| InstMov src dest => fun _ => this (machine_state\n\t\t\t\t(incr cur)\n\t\t\t\t(update cur dest (eval_operand cur src))\n\t\t\t)\n\t\t\t| InstAdd val dest => fun _ => this (machine_state\n\t\t\t\t(incr cur)\n\t\t\t\t(update cur dest ((eval_operand cur val) + (get cur dest)))\n\t\t\t)\n\t\t\t| _ => fun _ => impossible\n\t\t\tend\n\t\t); destruct instr; try contradiction; auto.\n\tDefined.\n\n\tInductive Steps\n\t\t(program: list Instruction)\n\t: MachineState -> list MachineState -> MachineState -> Prop :=\n\t\t| Steps_start: forall start next,\n\t\t\tStep program start next\n\t\t\t-> Steps program start [] next\n\n\t\t| Steps_Step: forall start steps prev cur,\n\t\t\tSteps program start steps prev\n\t\t\t-> Step program prev cur\n\t\t\t-> Steps program start (steps ++ [prev]) cur\n\t.\n\n\t(*\n\tInductive Trace: list MachineState -> Prop :=\n\t\t| Trace_start: forall start,\n\t\t\tTrace [start]\n\n\t\t| Trace_step: forall past cur next,\n\t\t\tTrace (past ++ [cur])\n\t\t\t-> Step cur next\n\t\t\t-> Trace (past ++ [cur] ++ [next])\n\t.\n\n\tTheorem Trace_transitive: forall before mid after,\n\t\tTrace (before ++ [mid])\n\t\t-> Trace ([mid] ++ after)\n\t\t-> Trace (before ++ [mid] ++ after).\n\tProof. 
Qed.\n\t*)\n\n\tTheorem Steps_start_inversion program cur next:\n\t\tSteps program cur [] next -> Step program cur next.\n\tProof.\n\t\tinversion 1; subst; trivial;\n\t\tapply app_eq_nil in H0 as [_ Hfalse]; discriminate Hfalse.\n\tQed.\n\n\tTheorem Steps_connect_tail_last program first steps tail last:\n\t\tSteps program first (steps ++ [tail]) last -> Step program tail last.\n\tProof.\n\t\tinversion 1; subst.\n\t\t- apply app_cons_not_nil in H0; contradiction.\n\t\t- apply app_inj_tail in H0 as []; subst; assumption.\n\tQed.\n\n\tTheorem Steps_connect_first_head program first head steps last:\n\t\tSteps program first ([head] ++ steps) last -> Step program first head.\n\tProof.\n\t\t(* TODO: the induction needs to generalize over steps; unfinished *)\n\tAdmitted.\n\n\n\t(*Theorem Steps_append program first steps1 meet steps2 last:\n\t\tSteps program first steps1 meet\n\t\t-> Steps program meet steps2 last\n\t\t-> Steps program first (steps1 ++ [meet] ++ steps2) last.\n\tProof.\n\t\tintros.\n\tQed.*)\n\n\tTheorem list_split_around_meet {T} (items: list T):\n\t\tforall n (Hlength: n < length items),\n\t\texists meet, meet = safe_lookup items Hlength\n\t\t/\\ items = (take n items) ++ [use meet] ++ (drop (S n) items).\n\tAdmitted.\n\n\tTheorem Steps_split program first steps last:\n\t\tSteps program first steps last\n\t\t-> forall n (Hlength: n < length steps),\n\t\texists meet,\n\t\t\tmeet = safe_lookup steps Hlength\n\t\t\t/\\ Steps program first (take n steps) (use meet)\n\t\t\t/\\ Steps program (use meet) (drop (S n) steps) last.\n\tProof.\n\tAdmitted.\n\n\n\tDefinition execute_eternal\n\t\t(program: list Instruction)\n\t\t(well_formed: WellFormed program)\n\t\t(start: MachineState)\n\t: forall previous cur, Within 
program cur -> .\n\t\trefine (cofix execute_eternal previous cur _ _ :=\n\t\t\tlet (instr, _) := (get_instr cur program) in\n\t\t\tif (is_stopping instr) then (previous ++ cur)\n\t\t\telse\n\t\t\t\tlet (next, _) := (@execute_instruction instr cur _) in\n\t\t\t\texecute_eternal next _\n\t\t).\n\tDefined.\n\n\tTheorem Steps_deterministic: forall program start between1 last1 between2 last2,\n\t\tSteps program start between1 last1\n\t\t-> Steps program start between2 last2\n\t\t-> length between1 = length between2\n\t\t-> last1 = last2 /\\ between1 = between2.\n\tProof.\n\tAdmitted.\n\n\n\tDefinition stateprop := MachineState -> Prop.\n\n\t(* first the partial correctness version *)\n\tDefinition partial_triple (block: list Instruction) (H Q: stateprop) :=\n\t\tforall prefix postfix first last between,\n\t\t\tH first\n\t\t\t-> Steps (prefix ++ block ++ postfix) first between last\n\t\t\t-> Q last.\n\n\t(* then the total correctness version *)\n\tDefinition total_triple (block: list Instruction) (H Q: stateprop) :=\n\t\tforall prefix postfix first, H first -> Within (prefix ++ block ++ postfix) first\n\t\t\t-> exists between last,\n\t\t\t\tSteps (prefix ++ block ++ postfix) first between last\n\t\t\t\t/\\ Q last.\n\n\tDefinition exiting_triple (block: list Instruction).\n\n\n\n\n\t(*\n\t\tsome concept I probably need is an idea of a Steps or Trace being \"within\" some program segment, as in all the machine states in that trace have program counters in the segment, so I can reason about \"exiting\" the segment,\n\t\talso theorems about concatenation of traces, so I can do things like \"the beginning of this trace is all within this segment, but this concatenated head state isn't, therefore we've exited the segment\"\n\t*)\n\n\n\t(*Theorem Trace_to_Step:\n\t\tforall program start steps cur,\n\t\t\tTrace program (steps ++ [start]) (Some cur)\n\t\t\t-> Steps program start steps cur.\n\tProof.\n\tQed.\n\n\tTheorem Steps_to_Trace:\n\t\tforall program start steps cur,\n\t\t\tSteps program start steps cur\n\t\t\t-> Trace program (steps ++ [start]) (Some 
cur).\n\tProof.\n\tQed.*)\n\n\n\t(*\n\t\tthings to prove using Trace:\n\t\t- if a trace is currently Trace_exit, then the program is stuck\n\t\t- `execute_unsafe_eternal` is approximated by the non-eternal version, and if it returns None the program isn't well-formed and there isn't a possible next step\n\t\t- `execute_eternal` is approximated by the non-eternal version, and if it returns None there doesn't exist a possible next step\n\t\t- a well_founded relation on the program step relation implies there exists a finite number of steps such that `n = (length states), Trace states None`. also execute_program perfectly defines execute_eternal in this situation\n\t*)\n\n\n\n\t(*\n\t\tI think what I want is this:\n\t\t- first just *local*, as in single instruction, versions of total state assertions (hoare triples) and framed state assertions (separation logic triples)\n\t\t- somehow tie those together with Trace?\n\t*)\n\n\t(*Notation stateprop := (MachineState -> Prop) (only parsing).*)\n\n\t(*hoare triples assert over total state, separation triples assert over the given state and all other states*)\n\n\n\t(*Definition execute_eternal\n\t\tprogram (well_formed: WellFormed program)\n\t: MachineState -> Step_stream program.\n\t\trefine (cofix execute_eternal cur =>\n\t\t\tlet (instr, _) = safe_lookup cur program in\n\t\t\tif (is_stopping instr) then\n\t\t\telse\n\t\t)\n\tDefined.\n\n\tCoFixpoint execute_eternal\n\t\tprogram (H: WellFormed program): Step_stream program\n\t:=\n\t\tdo_Start H\n\t.*)\n\n\n\tDefinition execute_program_unsafe\n\t\t(program: list Instruction)\n\t:\n\t\tnat -> MachineState -> option MachineState\n\t.\n\t\trefine (fix go Steps cur :=\n\t\t\tmatch (cur_instr cur program) with\n\t\t\t| None => None\n\t\t\t| Some instr =>\n\t\t\t\tif (is_stopping instr) then Some cur\n\t\t\t\telse match Steps with\n\t\t\t\t| 0 => None\n\t\t\t\t| S Steps' =>\n\t\t\t\t\tlet (next, _) := (@execute_instruction instr cur _) in\n\t\t\t\t\tgo Steps' 
next\n\t\t\t\tend\n\t\t\tend\n\t\t); assumption.\n\tDefined.\n\n\tNotation WellFormed program :=\n\t\t(forall cur next, Step program cur next -> Within program next)\n\t\t(only parsing).\n\n\tNotation InstWellFormed len_program := (fun index instr =>\n\t\tforall program cur next,\n\t\tlen_program <= (length program)\n\t\t-> lookup (index%nat) program = Some instr\n\t\t-> cur.(counter) = (index%nat)\n\t\t-> Step program cur next\n\t\t-> Within program next\n\t) (only parsing).\n\n\tTheorem Step_implies_instr program cur next:\n\t\tStep program cur next -> exists instr, (cur_instr cur program) = Some instr.\n\tProof. intros []; eauto. Qed.\n\n\tNotation IndexPairsWellFormed program :=\n\t\t(fun index_instr => InstWellFormed (length program) index_instr.1 index_instr.2)\n\t\t(only parsing).\n\n\tTheorem index_pairs_InstWellFormed_implies_WellFormed program:\n\t\tForall (IndexPairsWellFormed program) (imap pair program)\n\t\t-> WellFormed program.\n\tProof.\n\t\tintros H ?? HStep; rewrite Forall_lookup in H;\n\t\tspecialize (Step_implies_instr HStep) as [instr];\n\t\tspecialize (H cur.(counter) (cur.(counter), instr));\n\t\teapply H; eauto; apply index_pairs_lookup_forward; assumption.\n\tQed.\n\n\tDefinition check_instruction_well_formed len_program:\n\t\tforall index_instr, partial (InstWellFormed len_program index_instr.1 index_instr.2)\n\t.\n\t\trefine (fun index_instr =>\n\t\t\tif (is_stopping index_instr.2) then proven\n\t\t\telse if (lt_dec (S index_instr.1) len_program) then proven else unknown\n\t\t\t(*if (is_sequential instr)*)\n\t\t);\n\t\tdestruct index_instr as [index instr]; simpl in *;\n\t\tintros ???? 
Hsome Hcounter HStep; subst;\n\t\ttry apply (stopping_stuck s Hsome) in HStep;\n\t\tdestruct instr; inversion HStep; try contradiction; simpl in *; subst; lia.\n\tDefined.\n\n\tDefinition execute_program_unknown_termination\n\t\t(program: list Instruction)\n\t\t(well_formed: WellFormed program)\n\t:\n\t\tnat -> forall cur, Within program cur -> option MachineState\n\t.\n\t\trefine (fix go steps cur _ :=\n\t\t\tlet (instr, _) := (get_instr cur program) in\n\t\t\tif (is_stopping instr) then Some cur\n\t\t\telse match steps with\n\t\t\t| 0 => None\n\t\t\t| S steps' =>\n\t\t\t\tlet (next, _) := (@execute_instruction instr cur _) in\n\t\t\t\tgo steps' next _\n\t\t\tend\n\t\t); eauto.\n\tDefined.\n\n\tSection execute_program.\n\t\tVariable program: list Instruction.\n\t\tVariable well_formed: WellFormed program.\n\n\t\tVariable progression: MachineState -> MachineState -> Prop.\n\t\tVariable progression_wf: well_founded progression.\n\t\tVariable progress: forall cur next, Step program cur next -> progression next cur.\n\n\t\tProgram Fixpoint execute_program\n\t\t\tcur (H: Within program cur) {wf progression cur}\n\t\t: MachineState :=\n\t\t\tlet (instr, _) := (get_instr cur program) in\n\t\t\tif (is_stopping instr) then cur\n\t\t\telse\n\t\t\t\tlet (next, _) := (@execute_instruction instr cur _) in\n\t\t\t\texecute_program next _\n\t\t.\n\t\tSolve All Obligations with eauto.\n\tEnd execute_program.\n\nEnd Sized.\n\n(*Arguments Literal {size} _.\nArguments Register {size} _.\n\nArguments execute_program_unsafe {size} _ _ _.\n\nArguments InstMov {size} _ _.\nArguments InstAdd {size} _ _.\nArguments InstBranchEq {size} _ _ _.\nArguments InstBranchNeq {size} _ _ _.\nArguments InstExit {size}.*)\n\nNotation Within program cur :=\n\t(cur.(counter) < (length program)) (only parsing).\n\nNotation WellFormed program :=\n\t(forall cur next, Step program cur next -> Within program next)\n\t(only parsing).\n\nNotation InstWellFormed len_program := (fun index instr =>\n\tforall 
program cur next,\n\tlen_program <= (length program)\n\t-> lookup (index%nat) program = Some instr\n\t-> cur.(counter) = (index%nat)\n\t-> Step program cur next\n\t-> Within program next\n) (only parsing).\n\nNotation IndexPairsWellFormed program :=\n\t(fun index_instr => InstWellFormed (length program) index_instr.1 index_instr.2)\n\t(only parsing).\n\nLtac program_well_formed :=\n\tmatch goal with\n\t| |- WellFormed ?program =>\n\t\tlet program_type := type of program in\n\t\tmatch program_type with | list (@Instruction ?size) =>\n\t\t\tapply index_pairs_InstWellFormed_implies_WellFormed;\n\t\t\tfind_obligations__helper\n\t\t\t\t(IndexPairsWellFormed program)\n\t\t\t\t(@check_instruction_well_formed size (length program))\n\t\t\t\t(imap pair program)\n\t\tend\n\tend.\n\n\nModule redundant_additions.\n\tDefinition program: list (@Instruction 1) := [\n\t\tInstMov (Literal 0) (0%fin);\n\t\tInstAdd (Literal 1) (0%fin);\n\t\tInstAdd (Literal 1) (0%fin);\n\t\tInstAdd (Literal 1) (0%fin);\n\t\tInstAdd (Literal 1) (0%fin);\n\t\tInstAdd (Literal 1) (0%fin);\n\t\tInstExit\n\t].\n\tTheorem well_formed: WellFormed program. Proof. program_well_formed. Qed.\n\tTheorem within: Within program (state 0 [#0]). Proof. simpl; lia. Qed.\n\n\tExample test:\n\t\texecute_program_unknown_termination\n\t\t\twell_formed (length program) (state 0 [#0]) within\n\t\t= Some (state 6 [#5]).\n\tProof. reflexivity. Qed.\nEnd redundant_additions.\n\nModule redundant_doubling.\n\tDefinition program: list (@Instruction 1) := [\n\t\tInstMov (Literal 1) (0%fin);\n\t\tInstAdd (Register 0%fin) (0%fin);\n\t\tInstAdd (Register 0%fin) (0%fin);\n\t\tInstAdd (Register 0%fin) (0%fin);\n\t\tInstExit\n\t].\n\tTheorem well_formed: WellFormed program. Proof. program_well_formed. Qed.\n\tTheorem within: Within program (state 0 [#0]). Proof. simpl; lia. Qed.\n\n\tExample test:\n\t\texecute_program_unknown_termination\n\t\t\twell_formed (length program) (state 0 [#0]) within\n\t\t= Some (state 4 [#8]).\n\tProof. 
reflexivity. Qed.\nEnd redundant_doubling.\n\n\n(*Notation val := Operand (only parsing).\nNotation expr := Instruction (only parsing).\n\nNotation of_val := InstExit (only parsing).\n\nDefinition to_val (e: expr): option val :=\n\tmatch e with\n\t| InstExit _ v => Some v\n\t| _ => None\n\tend.\n*)\n(*\n\tSo the first program I'm interested in verifying is this one.\n\tI want to obviously verify it's safe and such, but also I want to be\n\nmain: (this label is implicit)\n\t{{ True }}\n\tMov 0 $r1\n\t{{ $r1 = 0 }}\nloop:\n\t{{ exists n < 10, $r1 = n }}\n\tAdd 1 $r1\n\t{{ exists n <= 10, $r1 = n + 1}}\n\tBranchNeq $r1 10 loop\ndone:\n\t{{ $r1 = 10 }}\n\tExit\n*)\n\n\n(*\n(*CoInductive Trace\n\t(program: list Instruction)\n: list MachineState -> option MachineState -> Prop :=\n\t| Trace_start: forall start,\n\t\tWithin program start\n\t\t-> Trace program [] (Some start)\n\n\t| Trace_step: forall prev cur next,\n\t\tTrace program prev (Some cur)\n\t\t-> Step program cur next\n\t\t-> Trace program (cur :: prev) (Some next)\n\n\t| Trace_exit: forall prev cur,\n\t\tTrace program prev (Some cur)\n\t\t-> (cur_instr cur program) = Some InstExit\n\t\t-> Trace program (cur :: prev) None\n.*)\n*)\n\n\n\n\nDefinition program_fizzbuzz := [\n\t(* we accept the input n in register 1 *)\n\t(* we then increment register 2 *)\n\t\"begin\"\n\t\t(InstMov 0 \"$2\")\n\n\t\"main\"\n\t\t(InstAdd 1 \"$2\")\n\n\t\t(* prepare for test_3 by moving our increment into register 3 *)\n\t\t(InstMov \"$2\" \"$3\")\n\t\"test_3\"\n\t\t(* TODO if the number is already less than 3, what's the semantics here? 
*)\n\t\t(InstSub 3 \"$3\")\n\t\t(* if the current remainder is greater than or equal to 3, we have to keep iterating *)\n\t\t(InstBranchLt 3 \"$3\" \"test_3\")\n\t\t(* otherwise we continue *)\n\t\t(* if the remainder isn't 0, then we skip printing \"Fizz\" *)\n\t\t(InstBranchNeq \"$2\" \"$3\" \"after_fizz\")\n\t\t(InstPrint \"fizz\")\n\n\t\"after_fizz\"\n\t\t(* prepare for test_5 by moving our increment into register 3 *)\n\t\t(InstMov \"$2\" \"$3\")\n\t\"test_5\"\n\t\t(InstSub 5 \"$3\")\n\t\t(* if the current remainder is greater than or equal to 5, we have to keep iterating *)\n\t\t(InstBranchLt 5 \"$3\" \"test_5\")\n\t\t(* if the remainder isn't 0, then we skip printing \"Buzz\" *)\n\t\t(InstBranchNeq \"$2\" \"$3\" \"after_buzz\")\n\t\t(InstPrint \"buzz\")\n\n\t\"after_buzz\"\n\t\t(* if our increment is less than the input, rerun the loop *)\n\t\t(InstBranchLt \"$2\" \"$1\" \"main\")\n\t\t(* otherwise fall through to exit *)\n\t\t(* should we actually literally exit? *)\n\t\t(InstExit)\n].\n"
  },
  {
    "path": "old/parser_low.rs",
    "content": "// use self::{Instruction::*, Value::*};\n// use anyhow::Error;\n// use inkwell::{context::Context, types::IntType, values::IntValue};\n// use nom::{\n// \tbranch::alt,\n// \tbytes::complete::tag,\n// \tcharacter::complete,\n// \tcombinator::map,\n// \tmulti::separated_list0,\n// \tsequence::{preceded, tuple},\n// \tFinish, IResult,\n// };\n// use ocaml_interop::{\n// \timpl_conv_ocaml_variant, ocaml_export, OCaml, OCamlInt, OCamlList, OCamlRef, ToOCaml,\n// };\n// use std::{collections::HashMap, fs::read, path::Path, str::from_utf8};\n\n// #[derive(Clone, Copy, Debug, PartialEq)]\n// pub enum Value {\n// \tConst(i32),\n// \tRef(i32),\n// }\n\n// #[derive(Clone, Copy, Debug, PartialEq)]\n// pub enum Instruction {\n// \tReturn(Value),\n// \tAdd(i32, Value, Value),\n// }\n\n// impl_conv_ocaml_variant! {\n// \tValue {\n// \t\tConst(v: OCamlInt),\n// \t\tRef(r: OCamlInt),\n// \t}\n// }\n\n// impl_conv_ocaml_variant! {\n// \tInstruction {\n// \t\tReturn(v: Value),\n// \t\tAdd( result: OCamlInt, op1: Value, op2: Value ),\n// \t}\n// }\n\n// pub fn parse_file(filename: &str) -> Result<Vec<Instruction>, Error> {\n// \tparse(from_utf8(read(filename)?.as_slice())?)\n// }\n\n// fn parse(i: &str) -> Result<Vec<Instruction>, Error> {\n// \t// eg:\n// \t//   %0 = 1 + 1\n// \t//   %1 = %0 + 1\n// \t//   return %1\n// \tOk(separated_list0(tag(\"\\n\"), instruction)(i)\n// \t\t.map_err(|err| err.to_owned())\n// \t\t.finish()\n// \t\t.map(|x| x.1)?)\n// }\n\n// fn constant(i: &str) -> IResult<&str, i32> {\n// \t// eg. 42\n// \tcomplete::i32(i)\n// }\n\n// fn reference(i: &str) -> IResult<&str, i32> {\n// \t// eg. %2\n// \tpreceded(tag(\"%\"), complete::i32)(i)\n// }\n\n// fn value(i: &str) -> IResult<&str, Value> {\n// \talt((map(constant, Const), map(reference, Ref)))(i)\n// }\n\n// fn add(i: &str) -> IResult<&str, Instruction> {\n// \t// eg. 
%1 = 3 + %0\n// \tmap(\n// \t\ttuple((reference, tag(\" = \"), value, tag(\" + \"), value)),\n// \t\t|(result, _, op1, _, op2)| Add(result, op1, op2),\n// \t)(i)\n// }\n\n// fn ret(i: &str) -> IResult<&str, Instruction> {\n// \t// eg. return 4\n// \tmap(preceded(tag(\"return \"), value), Return)(i)\n// }\n\n// fn instruction(i: &str) -> IResult<&str, Instruction> {\n// \talt((add, ret))(i)\n// }\n\n// pub fn emit_to_file<P: AsRef<Path>>(instructions: &[Instruction], to: P) {\n// \tlet context = Context::create();\n// \tlet module = context.create_module(\"lab\");\n// \tlet builder = context.create_builder();\n\n// \tlet i32_type = context.i32_type();\n// \tlet fn_type = i32_type.fn_type(&[], false);\n// \tlet function = module.add_function(\"main\", fn_type, None);\n// \tlet basic_block = context.append_basic_block(function, \"doit\");\n\n// \tbuilder.position_at_end(basic_block);\n\n// \tlet mut env = (HashMap::new(), i32_type);\n// \tfn val<'ctx>(env: &(HashMap<i32, IntValue<'ctx>>, IntType<'ctx>), v: Value) -> IntValue<'ctx> {\n// \t\tmatch v {\n// \t\t\tConst(i) => env.1.const_int(i as u64, false),\n// \t\t\tRef(r) => *env.0.get(&r).unwrap(),\n// \t\t}\n// \t}\n\n// \tfor instruction in instructions {\n// \t\tmatch instruction {\n// \t\t\tReturn(v) => {\n// \t\t\t\tbuilder.build_return(Some(&val(&env, *v)));\n// \t\t\t\tbreak;\n// \t\t\t}\n// \t\t\tAdd(result, op1, op2) => {\n// \t\t\t\tlet a = val(&env, *op1);\n// \t\t\t\tlet b = val(&env, *op2);\n// \t\t\t\tenv.0.insert(*result, builder.build_int_add(a, b, \"\"));\n// \t\t\t}\n// \t\t}\n// \t}\n\n// \tmodule.write_bitcode_to_path(to.as_ref());\n// }\n\n// pub fn parse_file_and_emit(filename: &str) -> Result<Vec<Instruction>, Error> {\n// \tlet prog = parse_file(filename)?;\n// \temit_to_file(&prog, Path::new(filename).with_extension(\"bc\"));\n// \tOk(prog)\n// }\n\n// ocaml_export! 
{\n// \tfn rust_parse(cr, expr: OCamlRef<String>) -> OCaml<Result<OCamlList<Instruction>, String>> {\n// \t\tlet expr: String = expr.to_rust(&cr);\n// \t\tparse(expr.as_str()).map_err(|err| format!(\"{:#}\", err)).to_ocaml(cr)\n// \t}\n\n// \tfn rust_parse_file(cr, filename: OCamlRef<String>) -> OCaml<Result<OCamlList<Instruction>, String>> {\n// \t\tlet filename: String = filename.to_rust(&cr);\n// \t\tparse_file(filename.as_str()).map_err(|err| format!(\"{:#}\", err)).to_ocaml(cr)\n// \t}\n\n// \tfn rust_parse_file_and_emit(cr, filename: OCamlRef<String>) -> OCaml<Result<OCamlList<Instruction>, String>> {\n// \t\tlet filename: String = filename.to_rust(&cr);\n// \t\tparse_file_and_emit(filename.as_str()).map_err(|err| format!(\"{:#}\", err)).to_ocaml(cr)\n// \t}\n// }\n\n// #[cfg(test)]\n// mod tests {\n// \tuse crate::*;\n// \tuse std::{fmt::Debug, fs};\n\n// \tfn err_to_string<T, E: Debug>(r: Result<T, E>) -> Result<T, String> {\n// \t\tr.map_err(|err| format!(\"{:?}\", err))\n// \t}\n\n// \tmacro_rules! 
test_parse {\n// \t\t($name:ident, $in:expr, $out:expr) => {\n// \t\t\t#[test]\n// \t\t\tfn $name() {\n// \t\t\t\tassert_eq!(err_to_string(parse($in)), $out);\n// \t\t\t}\n// \t\t};\n// \t}\n\n// \ttest_parse!(noop, \"\", Ok(vec![]));\n// \ttest_parse!(four, \"return 4\", Ok(vec![Return(Const(4))]));\n// \ttest_parse!(\n// \t\tplus,\n// \t\t\"%0 = 2 + 3\\nreturn %0\",\n// \t\tOk(vec![Add(0, Const(2), Const(3)), Return(Ref(0))])\n// \t);\n// \ttest_parse!(\n// \t\tnest_plus,\n// \t\t\"%0 = 2 + 3\n// %1 = 1 + %0\n// %2 = %1 + 4\n// %3 = 0 + %2\n// return %3\",\n// \t\tOk(vec![\n// \t\t\tAdd(0, Const(2), Const(3)),\n// \t\t\tAdd(1, Const(1), Ref(0)),\n// \t\t\tAdd(2, Ref(1), Const(4)),\n// \t\t\tAdd(3, Const(0), Ref(2)),\n// \t\t\tReturn(Ref(3)),\n// \t\t])\n// \t);\n\n// \t#[test]\n// \tfn from_file() {\n// \t\tassert_eq!(err_to_string(fs::write(\"test.mg\", b\"return 0\")), Ok(()));\n// \t\tlet result = err_to_string(parse_file(\"test.mg\"));\n// \t\tassert_eq!(err_to_string(fs::remove_file(\"test.mg\")), Ok(()));\n// \t\tassert_eq!(result, Ok(vec![Return(Const(0))]));\n// \t}\n// }\n\n\n\n// // // use inkwell::OptimizationLevel;\n// // // use inkwell::builder::Builder;\n// // use inkwell::context::Context;\n// // // use inkwell::execution_engine::{ExecutionEngine, JitFunction};\n// // // use inkwell::module::Module;\n// // use std::error::Error;\n\n// // // type SumFunc = unsafe extern \"C\" fn(u64, u64, u64) -> u64;\n\n// // // struct CodeGen<'ctx> {\n// // // \tcontext: &'ctx Context,\n// // // \tmodule: Module<'ctx>,\n// // // \tbuilder: Builder<'ctx>,\n// // // \t// execution_engine: ExecutionEngine<'ctx>,\n// // // }\n\n// // // impl<'ctx> CodeGen<'ctx> {\n// // // \tfn jit_compile_sum(&self) -> Option<JitFunction<SumFunc>> {\n// // // \t\tlet i32_type = self.context.i32_type();\n// // // \t\tlet fn_type = i32_type.fn_type(&[i32_type.into(), i32_type.into(), i32_type.into()], false);\n// // // \t\tlet function = self.module.add_function(\"sum\", fn_type, 
None);\n// // // \t\tlet basic_block = self.context.append_basic_block(function, \"entry\");\n\n// // // \t\tself.builder.position_at_end(basic_block);\n\n// // // \t\tlet x = function.get_nth_param(0)?.into_int_value();\n// // // \t\tlet y = function.get_nth_param(1)?.into_int_value();\n// // // \t\tlet z = function.get_nth_param(2)?.into_int_value();\n\n// // // \t\tlet sum = self.builder.build_int_add(x, y, \"sum\");\n// // // \t\tlet sum = self.builder.build_int_add(sum, z, \"sum\");\n\n// // // \t\tself.builder.build_return(Some(&sum));\n\n// // // \t\tunsafe { self.execution_engine.get_function(\"sum\").ok() }\n// // // \t}\n// // // }\n\n\n// // fn main() -> Result<(), Box<dyn Error>> {\n// // \tlet context = Context::create();\n// // \tlet module = context.create_module(\"lab\");\n// // \tlet builder = context.create_builder();\n// // \t// let execution_engine = module.create_jit_execution_engine(OptimizationLevel::None)?;\n// // \t// let codegen = CodeGen {\n// // \t// \tcontext: &context,\n// // \t// \tmodule,\n// // \t// \tbuilder: context.create_builder(),\n// // \t// \t// execution_engine,\n// // \t// };\n\n// // \tlet i32_type = context.i32_type();\n// // \tlet fn_type = i32_type.fn_type(&[], false);\n// // \tlet function = module.add_function(\"main\", fn_type, None);\n// // \tlet basic_block = context.append_basic_block(function, \"doit\");\n\n// // \tbuilder.position_at_end(basic_block);\n// // \tlet sum = builder.build_int_add(i32_type.const_int(2, false), i32_type.const_int(4, false), \"sum\");\n// // \tlet sum = builder.build_int_add(sum, sum, \"sum\");\n// // \tbuilder.build_return(Some(&sum));\n// // \tmodule.write_bitcode_to_path(&std::path::Path::new(\"lab.bc\"));\n\n// // \t// let sum = codegen.jit_compile_sum().ok_or(\"Unable to JIT compile `sum`\")?;\n\n// // \t// let x = 1u64;\n// // \t// let y = 2u64;\n// // \t// let z = 3u64;\n\n// // \t// unsafe {\n// // \t// \tprintln!(\"{} + {} + {} = {}\", x, y, z, sum.call(x, y, z));\n// // \t// 
\tassert_eq!(sum.call(x, y, z), x + y + z);\n// // \t// }\n\n// // \tOk(())\n// // }\n"
  },
  {
    "path": "posts/approachable-language-design.md",
    "content": "we ought to use nonword symbols only to indicate concepts of the *language*, things that can't be represented *within* the language, ones that apply to all types equally. that way we can get the broadest use from them\n\nfor example, using `..` or `...` for exclusive and inclusive range operators is embedding a *trait-level concept* (ability for some type to generate a range from a beginning to an end, or to perform an operation from one type to another) directly into the language syntax (better to just create trait functions like `1 :upuntil 5` or `1 :upthrough 5` that return iterators). In contrast, using the same operators to spread an indexed or a named object or especially an indexed or named *type* into another is more general. it's kind of an operator on *kinds* and is shorthand for a *metaprogramming* construct over all types rather than a programming construct over concrete types.\n\n\n\n\n\nalways prefer to pass data explicitly and only proofs and types (which are really just types) implicitly\n\n\n\nthe big problem with the convention of global constructors for datatype fields is that the names end up having to be mangled since they aren't scoped and could collide! it's terrible design to easily allow these unscoped operators/constructors\n\n\n\nOne of the big ideas to create an approachable language is to really carefully design it to separate *hard prerequisite knowledge* from *defined knowledge*. Basically hard prerequisites are all the things someone absolutely must know in order to read or write the language, for example the syntax, primitive keywords and constructs, fundamental concepts, etc. These things can be much more terse and cryptic *because the learner has a known and prerequisite learning path to understand them* in the form of the basic language documentation. 
We know they'll be encountering them constantly, and we expect them to know them because we've provided such a paved path to do so, so it's okay if they're terse and efficient. The same doesn't go for things that can be defined *within* the language! Some random library shouldn't have the same power as the main language to introduce syntax or constructs, since we don't have any idea if someone will be able to track down that random library as the source of those constructs. The wall between what is primitively part of the language and what has been defined in user-land within the language should be crystal clear at all times.\n\nAnother benefit of not allowing arbitrary custom symbolic syntax is that when a library author defines an operation, they are *forced* to come up with some intelligible name for it, which is a moment they can choose to use as an opportunity to make their code more clear. They of course can still choose a terrible name, but at least their poor choice will be more clearly exposed as a poor choice rather than being able to hide behind some \"plausible\" symbology.\n"
  },
  {
    "path": "posts/comparisons-with-other-projects.md",
    "content": "# Comparisons with other projects\n\nAn important question to ask of any project is: \"How is the project different from X?\" or more probingly: \"Why build this new project instead of just using X? Why can't X address your needs?\" This page attempts to thoroughly answer that question.\n\nMany comparisons with other projects aren't actually that interesting, since the projects don't even have the same goals, or the comparison project isn't \"maxed out\" in one of Magmide's design pillars [(logical/computational/expressive power)](./posts/design-of-magmide.md).\n\nMany of these projects are essentially attempts to allow users to verify code in a fully automated way. Although full automation is nice, I ultimately don't think it's productive to hide the underlying logical ideas from users, instead of just putting in the work to explain them properly. If a tool allows manual proofs alongside metaprogramming capabilities then it can still have full automation in many domains, whereas if a tool can only prove a certain subset of claims automatically then it's forever limited to that subset.\n\n- Rust/LLVM: Not maxed out in logical power, can't prove correctness.\n- [Liquid Haskell](https://liquid.kosmikus.org/01-intro.html): Not maxed out in logical power since it only has refinement types and not a full type theory. Not maxed out in computational power since Haskell doesn't easily allow bare metal operations.\n- [Ivy](http://microsoft.github.io/ivy/language.html): Only a first order logic, so not maxed out in logical power. However the idea of separating pure functions and imperative procedures was part of the inspiration for the Logic/Host separation.\n- [TLA+](https://en.wikipedia.org/wiki/TLA%2B): Not based on dependent type theory, so not maxed out in logical power. 
Not maxed out in computational power, since the language itself is only intended for specification writing rather than combined code/proofs.\n- Isabelle/HOL, ACL2, PVS, Twelf: Not maxed out in logical power, [missing either dependent types, higher order logic, or general `Prop` types](http://adam.chlipala.net/cpdt/html/Cpdt.Intro.html).\n- [Dafny](https://dafny-lang.github.io/dafny/): Not maxed out in computational power, since it only exposes a fairly high level imperative language. It seems like they've tried too hard to create an \"easy mode\" tool.\n- [Rodin tool/B-method](https://en.wikipedia.org/wiki/Rodin_tool): Only seems to be first order, so not maxed out in logical power. Also doesn't seem to use a bare metal language and separation logic to reason about real programs, which isn't surprising since separation logic was only recognizably invented in around 2002.\n- [Rudra](https://github.com/sslab-gatech/Rudra): Intended to only uncover common undefined behaviors, rather than to prove arbitrary properties.\n- [Prusti](https://github.com/viperproject/prusti-dev): Intended to only automatically check for absence of panics or overflows, or pre/post conditions rather than arbitrary properties.\n- [RustHorn](https://github.com/hopv/rust-horn): Intended to only automatically check pre/post conditions rather than arbitrary properties.\n- [KLEE](https://llvm.org/pubs/2008-12-OSDI-KLEE.html) and related tools: Intended to only generate reasonably high coverage tests rather than prove arbitrary properties.\n- [Property-based testing tools](https://www.lpalmieri.com/posts/an-introduction-to-property-based-testing-in-rust/): Intended to only test a random subset of values rather than all possible values.\n\n---\n\nThen there are many academic projects which verify software at the same deep level as Magmide intends to, but don't have the intent to create a tool that can act as both the logical and computational foundation for all software. 
These research projects will be very useful to learn from, but again their goals aren't as directly focused on broad engineering practice as Magmide.\n\n- [Lambda Rust/RustBelt](https://www.ralfj.de/research/phd/thesis-screen.pdf): A formalization of a realistic subset of Rust, proofs of its soundness, and proofs of the soundness of many core Rust libraries that use `unsafe`. This project is what spurred the development of the Iris separation logic that will be extensively used in Magmide. RustBelt is obviously the direct ancestor of the Magmide project, and laid the formal foundations for it to be possible. However it only intended to verify Rust code \"on the side\", rather than creating a tool capable of *implementing* a verified version of Rust. I hope to deeply collaborate with the RustBelt authors!\n- [Vellvm](https://github.com/vellvm/vellvm): A formalization of LLVM in Coq. Doesn't intend to use this formalization to create a self-hosting/verifying proof assistant. Importantly, doesn't use a higher order separation logic such as Iris, so it likely can't be used directly in Magmide. However the project itself and its creators will be invaluable sources of knowledge.\n- [Vale](https://www.microsoft.com/en-us/research/wp-content/uploads/2017/08/Vale.pdf): A Dafny tool capable of verifying the correctness of annotated assembly language cryptographic routines. This project is extremely cool and similar to Magmide in the sense that it is capable of verifying arbitrary conditions of bare metal code. However, it is very narrowly focused on cryptographic applications, and has no intention of implementing a general purpose language. 
However the success of the project (and inspired work such as [this more efficient F* verification condition generator](https://www.andrew.cmu.edu/user/bparno/papers/vale-popl.pdf)) shows that something like Magmide is possible.\n- [Bedrock](http://adam.chlipala.net/papers/BedrockICFP13/BedrockICFP13.pdf): A project that honestly feels very similar to Magmide! Bedrock is especially concerned with metaprogramming and verification of low-level code. However the project has been closed and the research group has been working on a [much smaller successor project `bedrock2`](https://github.com/mit-plv/bedrock2), along with [many other more academic projects](https://github.com/mit-plv/). It's very unclear to me what relationship these projects have with the private and proprietary [Bedrock Systems](https://bedrocksystems.com), other than both being directly related to [Adam Chlipala](http://adam.chlipala.net). I strongly believe it's absolutely essential these verification systems are open source, not only controlled by corporations and governments, and are shared as broadly as possible, so even if Bedrock Systems was filling the exact same need as Magmide there would be a need for an open source version. All the same, the original bedrock is yet another project that is promising for Magmide, since it shows that verified *macros* are possible and tractable. My (wildly conjecturing) guess about why the original project was discontinued is because Iris came about, which seems reasonable since just in 2020 the research group [had a guest post about Iris on their blog](https://plv.csail.mit.edu/blog/iris-intro.html#iris-intro). It probably didn't make sense to pursue their previous direction if they could learn/use Iris instead.\n- [DeepSpec](https://deepspec.org/main): A project verifying a whole family of extant systems end-to-end, which happens to include Vellvm. 
This again is very similar to the Magmide project, but isn't at all focused on creating tools suitable for mainstream engineers, or building a *new* foundational language. Although I think this research is extremely valuable, I don't think it's going to create a lot of industry excitement for verification.\n- [Metamath Zero](https://github.com/digama0/mm0): A project intended to create a minimal and extremely efficient language for specifications and proofs. This project is very focused on simplicity of the proof language and the speed of the verifier, which aren't particular goals of the Magmide project. Magmide is more concerned with creating a foundational tool intended for mainstream use, so simplicity/speed of the verifier is desired but not essential. Instead of relying on a simple verifier implementation, Magmide is relying on Coq to bootstrap initial correctness, and speed of verification isn't a goal until after the project is bootstrapped [(\"make it work, make it right, make it fast\")](https://wiki.c2.com/?MakeItWorkMakeItRightMakeItFast). However I'm excited to learn lessons from mm0 both during and after Magmide's bootstrapping!\n- [ATS](http://www.ats-lang.org/): ATS is an extremely advanced and interesting language, which seems to already be capable of building very robust and performant code. It has lots of conceptual overlap with Magmide, such as integrating linear types reminiscent of separation logic, compiling to a bare metal language (C), and providing special syntax for refinement types. 
However there are a few important differences:\n  - the design is extremely obtuse and the learning materials very academic (frankly I found it difficult even to navigate the docs enough to evaluate the merits of the design)\n  - the language seems to require all correctness assertions be somehow integrated into the type signatures of functions, whereas in Magmide the intent is to both allow assertions in types (using asserted types such as `Int & != 0`) as well as assertions in separate theorems, which should allow users to add more and more assertions without hopelessly cluttering the actual implementation\n  - the \"proof threading\" concept where proof objects are explicitly passed and returned is very painful, and Magmide intends to instead make all proofs either attached to data values through asserted types, or simply inferred based on which data values are passed (without requiring extra syntax to signal inference), or deferred to the proof section after the end of the function\n  - the linear type system isn't as powerful as Iris, which means all the special use cases Iris specifically worked to support are likely not supported\n\n  However my largest criticism of the project is its continued insistence on the pure functional paradigm. As I discuss at the end of this page, we shouldn't be trying to force the pure functional paradigm on our inherently imperative computational environments, but instead finding ways to ergonomically encode imperative concepts in the world of pure logic.\n\n  Ultimately ATS is very interesting, and I hope the creators can share their insights and ideas with Magmide as it matures.\n\n---\n\nNow to the really interesting comparisons, those with other higher order dependently typed proof assistants. 
I'll focus on Coq, Lean, and F*.\n\nBefore I go on, I just want to make sure something's crystal clear:\n\n**These three projects are amazing, and have obviously involved a terrifying amount of work from a large number of shockingly intelligent and hardworking people.** I'm very confident I could never have come up with the core ideas behind any of them, or implemented anything like them if I wasn't slavishly copying from them. **Nothing I say below is meant to disparage the projects!** I'm not an academic, but I've done just barely enough work to understand this academic field and apply my self-taught non-academic viewpoint. And from my viewpoint I can see the possibility of a truly beautiful and unified language that finally gets formal verification into the mainstream. Please tell me if I'm crazy or am missing something really obvious! I'm just saying what I see.\n\nWith that out of the way....\n\nThose three projects could all be used to *logically define* Magmide, and since they all *technically* have the capability to produce running code, they could rival the intended use cases of Magmide. However none of them quite fit.\n\nAll three of them certainly feel very academic, and I'm not even sure exactly how hard the projects are *trying* to be approachable and achieve general adoption. I've already talked a lot about how academics do a pretty bad job explaining their work, often assuming shared knowledge rather than pointing readers toward prerequisites, using formal definitions and jargon instead of intuitive examples and metaphors, and not prioritizing ergonomic tools. 
But the *real* reason I think these languages haven't achieved general adoption is more nuanced:\n\n*I strongly believe all three of them are overly dogmatic about pure functional programming.* They mistakenly assume the functional programming language at their heart will *itself* be the thing people use to build software.\n\nFunctional programming may have its devotees, but there's a reason it's much less adopted than imperative methods: *real computers aren't pure or functional!* In a real computer, *every* action is impure and effectful, since even the most basic operations like updating registers or memory are intrinsically mutations of global state. The main idea of functional programming is a falsehood, one that makes some problems easier to reason about, but at the cost of ignoring the real nature of the problem. That extreme level of abstraction isn't always intuitive or helpful, and most engineers trying to build high performance systems that take advantage of the real machine will never be willing to make that sacrifice.\n\nThe Magmide design in contrast *splits up* Logic and Host into separate \"dual\" languages, each used in exactly the way it's most natural. Logic is the imaginary level where pure functions and mathematical algorithms and idealized models exist, which is the perfect realization of the goals of functional programming. Then those logical structures only exist at compile time to help reason about the messy and truly computational behavior of Host. Separation logic is what makes it possible to make robust safety and correctness assertions about imperative code, rather than simply outlawing mutation and side effects as is done in functional languages. And with a deeply powerful separation logic like Iris we can build things like trackable effects that are more practical, flexible, and ergonomic than other effect systems.\n\nThis again brings to mind the possible comparison between Rust and C: \"Why build Rust? 
Can't you do everything in C you could do in Rust?\" Well, yes you could! But... do you really want to? It isn't *only* about whether something's possible, it's about whether it's natural and clear and ergonomic. Why mix together the pure logical code and the real computational code when doing so doesn't make things easier and isn't really true? We don't want abstraction mismatches in our foundational language!\n\nSo in other words...\n\n![stop trying to make functional programming happen, it's not going to happen](https://blainehansen.me/stop-trying-to-make-fp-happen.jpg)\n\n... or at least, just use pure functional languages in contexts where their purity is actually correct.\n\nI obviously don't think functional languages should never be used to write programs. But we have to acknowledge the limitations of functional programming. With verified imperative foundations underneath us, it will be much easier to discover and implement whatever paradigms we find truly useful in whatever contexts they're useful, such as optimizations like [\"functional but in-place\"](https://www.microsoft.com/en-us/research/uploads/prod/2020/11/perceus-tr-v1.pdf).\n\nLet's dive into each of those three projects in detail:\n\n## Coq\n\nCoq has made a lot of frustrating design decisions.\n\n- Metaprogramming is [technically possible in Coq](https://github.com/MetaCoq/metacoq), but it was grafted on many years into the project, and it feels like it.\n- The language is extremely cluttered and obviously [\"designed by accretion\"](https://stackoverflow.com/questions/56517779/what-is-the-difference-between-lemma-and-theorem-in-coq).\n- All the documentation and introductory books were clearly written by academics who have no interest in helping people with deadlines build something concrete. 
Compare the [Coq standard library file for `Equivalence`](https://coq.inria.fr/library/Coq.Classes.Equivalence.html) with the somewhat related [`Eq` trait in Rust](https://doc.rust-lang.org/std/cmp/trait.Eq.html).\n- The [Notation system](https://coq.inria.fr/refman/user-extensions/syntax-extensions.html) just begs for unclear and profoundly confusing custom syntax, and is itself extremely overengineered.\n- Using the tool [can be quite punishing](https://softwarefoundations.cis.upenn.edu/lf-current/Induction.html#lab50).\n- It's a pure functional language with a garbage collector, so it will never perform as well as a self-hosted bare metal compiler.\n- And let's be honest, the name \"Coq\" is just terrible.\n\nAnother really important problem is that Coq can only produce runnable programs with the [extraction mechanism](https://softwarefoundations.cis.upenn.edu/lf-current/Extraction.html), which gives no guarantees about the extracted code doing the same thing as the original. Extraction [isn't itself verified](https://github.com/MetaCoq/metacoq/issues/163), so arbitrary bugs are possible during the process, and even if it were verified it would rely on the target language environment being correct. Although fully verified Magmide toolchains are very far away, the design is tailored specifically to that goal.\n\nCoq has existed *since 1989* and is still a very niche tool mostly only used by academics or former academics. Rust by comparison doesn't offer anywhere close to the correctness-proving power, has only been a mature language since 2015, but has achieved truly impressive adoption. The most damning accusation I can make against Coq is that it isn't even that broadly adopted *in academia*. Why aren't almost all papers in mathematics, logic, philosophy, economics, and computer science verified in Coq? 
And yet approachable tools like python and julia and matlab are much more common?\n\nCoq is still powerful enough to be very useful though, which is why I've chosen it as Magmide's bootstrapping language. I'm working on [`posts/coq-for-engineers.md`](./posts/coq-for-engineers.md) to help get passionate contributors up the learning curve enough to be helpful, because I know I can't build this all myself, and I'm not sure how interested academics will be to help me :fearful:\n\n<!-- using myself as an example, I'm an extremely determined and curious person who has been hell-bent on understanding both it and the theory behind it, but since I'm not being led through it in an academic context where all the implicit knowledge is exposed through in-person mentors, it has been extremely challenging -->\n\nAnd of course I don't think it would be wise to just throw away all the awesome work done by the Coq project. At some point we could create a parser/converter to allow old Coq code to be read and used by Magmide.\n\n## Lean\n\nLean is very similar to Coq, and it seems its entire purpose is to be a more cleanly designed successor! However I'm somewhat frustrated that despite having an overall cleaner design, it isn't that substantially different. For example it still has a very [\"baked in\" custom notation metaprogramming system](https://leanprover.github.io/lean4/doc/syntax.html) rather than something more flexible.\n\nIt also makes the mistake of overemphasizing pure functional programming, and muddies the theorem proving language with a bunch of effectfulness concepts and builtin computational types. It also uses the interpreted pure functional language itself as the metaprogramming language, which will hamper performance.\n\nOverall it seems the project is more interested in the needs of academic mathematicians, or at least that seems to be the group who's actually been adopting it. 
I am excited to see what new things they come up with though!\n\n## F*\n\nF* is the project that's most frustratingly close to being capable of Magmide's goal, since it explicitly supports extraction to various targets and is already being used in production-grade verification projects. But again, I don't think it's going to achieve general adoption, not just because of its academic tone, but because it also muddies the pure logical language with effectful computation concepts.\n\nEffectfulness is inherently an imperative computational concept. It seems very counterproductive to me to add primitive effects to a logical lambda calculus, since the whole point of a logical language is that it can be used to model any kind of effects for any kind of system. The logical language should be absolutely pure, because its job is just to do pure logic rather than computation.\n\nMost real functions will have *many* effects, and their authors only care in a small handful of circumstances, when those effects are unexpected/undesired. It's better to use automated checks/proofs to assert a function is *free from* certain effects rather than having to explicitly list effects.\n\n[SteelCore](https://www.fstar-lang.org/papers/steelcore/steelcore.pdf), the variant of separation logic used in various F* projects, also doesn't support the kind of recursive/impredicative ghost state and complex resource algebras that Iris does, or at least only supports them if they evolve monotonically.\n\n<!-- I'm also somewhat frustrated at how verbose [Low*](https://fstarlang.github.io/lowstar/html/Introduction.html) is. If you're writing code that's supposed to model C, then it's obvious that we're using the C memory model and following the C calling convention. Needing to explicitly point that fact out with an effect type isn't productive. -->\n\n<!-- I'm also confused that F* seems to be avoiding deep embeddings. Deep embeddings give us a huge amount of flexibility! 
Why build a proof assistant capable of targeting *some* environments, when a proof assistant has the power to target *all* environments?? -->\n\nAnd unfortunately, F* is also a frustrating name. It isn't immediately clear to everyone how to say it (asterisk? splat? bullet?) and searching for the name online is an annoying dance of trying combinations like `f-star`, `f star`, or `\"f*\"`.\n\nThis is all very frustrating, because F* is *so close!* It's right on top of the right feature set, but the fact that it *hasn't* caught the attention of the engineering mainstream is likely the only evidence you need that it *won't*. Maybe it's just not done! Maybe it could flesh out and distill its documentation and add a few conveniences, but I just don't think that's going to be enough.\n\nThe F* community seems very interested in solving the tricky balance between automation and manual proofs though, and they've done a lot of cool work relating to specification inference and automated verification condition generation, so I'll be watching them very closely to see what other handy ideas they come up with!\n\n---\n\nSo there you go. Maybe my problems with Coq and Lean and F* all seem like minor gripes to you. Maybe you're right! But again, the intention of this project is to build a proof language *for engineers*. Academics have so many little cultural quirks and invisible assumptions, and I rarely come across an academic project that doesn't *feel* like one. Magmide asks the question \"what if we designed a proof language from scratch to take formal verification mainstream?\" No other project has done that.\n"
  },
  {
    "path": "posts/coq-for-engineers.md",
    "content": "You'll probably have to chew on these big ideas over time, so I've tried my best to make them short and easy to read through quickly. That way it should be easy to come back and reread them as you need to.\n\n\nFirst a few chapters on basic ideas I just want to make sure we're on the same page about:\n\n- basic type system ideas, basically a tour of the Rust type system\n- pure functional languages and lambda calculi, what they are and why they're good for proving things\n- boolean logic (propositional calculus), using coq functions as truth tables\n- predicate logic (first-order logic). here I just talk about the ideas we want to be able to encode (`forall`, `exists`, predicates), and don't relate them to coq.\n\nnow we're going to get into the meat of it, the stuff that's special about coq, so first I'll just introduce all the big ideas we're going to cover with short explanations so you know they're coming\n\n- dependent types, not really getting into how to use them yet, just showing that they exist.\n- type theory, just talking about the ideas, again not really bothering with anything concrete.\n- calculus of constructions, which is just a particular type theory that allows us to define inductive types. inductive types and forall-style function types are the only primitives in the type system. the computation rules do the rest and are pretty simple rules about unfolding function calls and stuff. this chapter is where we actually start doing real interesting coq examples. we only worry about `Type` level stuff and inductive types though. start with true/false, then implication, then how implication is the same as forall, then and/or\n- curry-howard, and talk of the prop universe and how propositions are types and proofs are values. also talking about interactive mode and how it works.\n- indexed types. basically allows you to make the type generic, but generic over *values* rather than types. 
this ability means that the constructor rules have to be more specific.\n\nhttps://coq.inria.fr/refman/language/core/conversion.html\n\nthat's the entire big idea section! with that we're ready to just get into the language\n\n- reflexivity\n- rewriting\n- exists proofs\n- automation\n- destruction proofs\n- proving negations\n- induction over data structures\n- induction over propositions\n- inversion\n- coinduction (very short, mostly just refer to outside sources)\n\nthen the more \"reference by example\" that will slowly be filled in\n\n\n\n\n\n# What is logic?\n\n*Logic is made up!* Logic isn't \"real\" in any strict sense of the word. In logic, a \"theory\" is just some completely arbitrary collection of *rules* for defining and manipulating imaginary structures, and academic logicians just study different kinds of theories and what they're capable of doing. Anyone in the world could make up any system of rules they wanted, and then call that system a \"theory\".\n\nHowever some sets of rules are more *useful* than others! The reason we bother with logic is because it helps us think through difficult problems we want to solve, so theories are more useful the more they actually \"line up\" with the real world. If we're able to come up with a set of clear and strict rules we can always follow to reliably make predictions about the real world and real problems, then that set of rules is helping us be more aligned with reality. It acts as a crutch we can use to deal with complex problems.\n\nIn order to really \"line up\" with the real world, a logical theory must be \"consistent\" by never telling us things that aren't true. For example if our theories of counting and arithmetic said that `2 + 2 = 5`, then they wouldn't be very useful, because in the real world when we grab two objects in one hand, another two in our other hand, and then put them together and count them, we always get `4` rather than `5`. 
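\n\nJust to make this concrete, we can actually check this computation rule in Coq (don't worry about the syntax yet, this is just a quick sketch using Coq's built-in numbers):\n\n```v\n(* ask Coq to evaluate 2 + 2 using its computation rules *)\nEval compute in 2 + 2.\n\n(* we can even state the fact as a theorem and prove it by pure computation *)\nTheorem two_plus_two : 2 + 2 = 4.\nProof. reflexivity. Qed.\n```\n\n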
If you wanted to you could define a number system that made `2 + 2 = 5`! It just wouldn't be very useful.\n\nOf course if you ask an academic logician they could talk your ear off about consistency and proof theory and incompleteness and all the other meta-theoretical stuff academic logicians care about. I'm obviously simplifying things here, but that's because I don't really think it helps us much to spin our wheels thinking about these big questions when thinkers over the centuries have already given us excellent practical answers. When you write computer programs you don't bother to worry about how the electrons are moving around in your machine's transistors or what subatomic fields are allowing them to do so, that's all just abstracted. Let's do the same thing here.\n\n\n<!-- As I've been reading about logic and math while trying to understand formal verification, I've been frustrated at how fuzzy and vague a lot of it is. I think mathematicians have a bad habit of asking ultimately unproductive questions just because they're interesting, and letting themselves get [nerd-sniped]() trying to figure out absolutely everything.\n\nIn practical engineering, we're always looking for abstractions, simplifications we can use to ignore details that don't matter for the problem we're trying to solve right now. In my opinion, these mind-bending questions about what logic \"really is\" or where it comes from aren't that useful, and we'll get stuck spinning on them forever if we let ourselves. So I just want to quickly get these questions out of the way so we don't have to talk about them anymore while still making sure you know you *can* if you want to.\n\nSo, the answer to the question, \"what is logic?\"\n\nFor our purposes, we're just going to put \"logic\" and \"consistency\" beneath abstractions and say \"we don't care!\" We don't need to deeply understand that stuff to make progress. But it's helpful to know exactly where that abstraction level is! 
Instead of looking at that layer like it's mysterious, we're just postponing looking at it for now. And honestly, this area is kind of a mess. It seems just about every professor of logic has written a bunch of things about what logic and type theory and such are, and mostly everyone just says the same few things over and over again but using different words. Again, don't worry about it too much. If you want to dive down the rabbit hole, here are some resources you might enjoy.\n\nWhen going over different logical ideas, it might be tempting to ask questions like \"why should I trust this system of rules?\" \"what makes this system of rules better than this other system of rules?\" \"what makes this system of rules trustworthy or useful?\"\n\njavascript -> browser -> c/rust -> assembly -> hardware -> physics\nmagmide -> calculus of constructions -> type theory -> logic -> proof theory -> forever!!!\n\nTo prove we can make up our own rules, let's make our own theory of numbers! We'll define a system of rules, with exactly these rules:\n\n- There is a constant `zero` (remember, we can choose any name we want! the meaning of this constant completely depends on what we arbitrarily decide we're allowed to do with it!)\n- There is a constant `next`. `next` is like a function that can \"wrap\" either `zero` or any existing wrapping of `next`s. So we can write these \"wrappings\" like this: `next(zero)` or `next(next(zero))` etc. (remember, we could choose any \"syntax\" we want, such as `zero.next` or `(next zero)`).\n\nWith these two basic rules we can count! `0` is `zero`, `1` is `next(zero)`, etc. But without any more rules we can't do much else, for example we can't create an `add` function that can put together two of these \"numbers\". But we could choose to add many different new rules in order to make our theory of numbers \"strong\" enough to represent the idea of addition.\n\nyou'll notice that we've defined an *inconsistent* theory here! 
This is one way we could try to understand what it really *means* for a theory to be inconsistent, *that we can't actually follow all the rules all the time*, or that if we try to follow all the rules we won't know what to actually do in some situations. Basically, a theory is inconsistent if it doesn't have enough rules for us to always know what to do in every situation. we can make the above theory consistent by either removing things (remove `red` and all rules that explicitly mention it) or adding things (just arbitrarily decide that `green` `yo` `red` is `green`). you'll notice that an inconsistent theory is just one where there isn't enough detail in the theory to always follow the rules.\n\nhttps://plato.stanford.edu/entries/russell-paradox/\n\na paradox is just a situation that *seems* to be allowed by a theory but actually isn't specified enough to know what to do, since it implies questions we can't answer\n-->\n\n\n# What are propositional/predicate/first order/higher order logic?\n\nAcademic logicians have categorized different logical systems by how \"powerful\" they are. Basically\nhttps://en.wikipedia.org/wiki/First-order_logic\n\n- Propositional: basic ideas about true/false, and different truth table operations on those values.\n- Predicate/first-order logic: creating functions\n\n# What are dependent types?\n\nThink of a function that is supposed to take an integer and return a boolean representing whether or not that integer is equal to `1`. 
In Rust we might write that function like this:\n\n```rust\nfn is_one(n: usize) -> bool {\n  n == 1\n}\n```\n\nThe type of that function is `(usize) -> bool`, representing the fact that the function takes a single `usize` argument and returns a `bool`.\n\nBut notice that the *type* `(usize) -> bool` can apply to *many* different functions, all of which do different things:\n\n```rust\nfn is_one(n: usize) -> bool {\n  n == 1\n}\nfn is_zero(n: usize) -> bool {\n  n == 0\n}\nfn always_true(_: usize) -> bool {\n  true\n}\nfn always_false(_: usize) -> bool {\n  false\n}\n// ... many other possible functions\n```\n\nWhat if we want our type system to be able to *guarantee* that the boolean returned by our function did in fact perfectly correspond to the integer passed in being equal to `1`? Dependent types allow us to do that, because they are *types* that can reference *values*. Here's a Coq function that returns what's called a \"subset type\" doing exactly what we want (don't worry about understanding this code right now):\n\n```v\nFrom stdpp Require Import tactics.\n\nProgram Definition is_one n: {b | b = true <-> n = 1} :=\n  match n with\n  | 1 => true\n  | _ => false\n  end\n.\nSolve All Obligations with naive_solver.\n```\n\nIn Coq's dependent type system, any *value* used earlier in a type can be referenced in any type afterward. For example the type of `is_one` uses the `forall` keyword to introduce the name `n` that is referenced in the return type: `forall (n: nat), {b: bool | b = true <-> n = 1}`. 
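\n\nTo give a quick taste of why that richer type is useful (just a sketch, reusing the `is_one` definition above, and with made-up helper names): the value returned by `is_one` carries both the boolean and the proof about it, and Coq's standard projections `proj1_sig` and `proj2_sig` can pull out each part:\n\n```v\n(* just the boolean, discarding the proof *)\nDefinition is_one_bool (n: nat): bool := proj1_sig (is_one n).\n\n(* just the proof: evidence that the returned boolean tracks whether n = 1 *)\nDefinition is_one_proof (n: nat): proj1_sig (is_one n) = true <-> n = 1 :=\n  proj2_sig (is_one n).\n```\n\n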
Again, don't worry about the details yet, we'll go over that in detail later.\n\n<!-- Coq makes this a bit more difficult than we might prefer, but with a helper tactic from the [`stdpp` library](https://gitlab.mpi-sws.org/iris/stdpp) we don't have to think about -->\n\n# What is induction?\n\n# What is Type Theory?\n\nTo understand type theory, it is actually helpful to understand what came *before* type theory: set theory.\n\nType theory is just a way of defining what kinds of values can exist, and what operations can be performed on those values. That's really it! \"Type theory\" is actually a big umbrella term that contains a few *specific* type theories, but they all share one basic idea: every *term* in the logic, such as a variable or function, has some *type* defining what operations are allowed for that term. Type theories also define \"rewriting rules\", basically computation rules, about how a term can be \"computed\" or \"evaluated\" or \"reduced\" to transform it into a different term. Usually these rewriting rules are designed so that the terms only get \"simpler\", or get closer and closer to being pure values that can't be reduced any further. Academics call these irreducible terms \"normal forms\".\n\ntypes vs terms vs values\n\nhttps://en.wikipedia.org/wiki/Type_theory#Basic_concepts\n\nhttps://en.wikipedia.org/wiki/Inductive_type\n\n\n\n# What are pure functional languages?\n\n# What is the Lambda Calculus?\n\n# What is the Curry-Howard Correspondence?\n\nhttps://en.wikipedia.org/wiki/Intuitionistic_type_theory\n\nTypes can be seen as propositions and terms as proofs. In this way of reading a typing, a function type `α → β` is viewed as an implication: the proposition that `β` follows from `α`.\nhttps://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence\n\n# What is the Calculus of Constructions?\n\nhttps://en.wikipedia.org/wiki/Calculus_of_constructions\n\n# What are Hoare Triples?\n\nHoare Triples were invented by [Tony Hoare](), and are a very simple method for making logical assertions about the behavior of programs.\n\nEssentially, a \"triple\" is:\n\n- A piece of a program, or a *term* that we're making assertions about. Depending on the kind of language, it could be a function, a series of statements, an expression, etc.\n- A *precondition* that we assume to be true before the term is evaluated.\n- A *postcondition* that we claim will be true after the term is evaluated.\n\nWe \"assert\" a triple by proving that if the precondition is true before you evaluate the term then the postcondition will always be true. This makes the precondition a *requirement* for anyone evaluating the term if they want the postcondition.\n\nThe pre/post conditions are usually written in double braces (`{{ some assertion }}`) and put before and after the term. Here's a really basic example:\n\n```\n{{ n = 0 }}\nwhile (n < 5) {\n  n = n + 1\n}\n{{ n = 5 }}\n```\n\nWe can write triples that aren't true, we just won't be able to prove them!\n\n```\n{{ n = 0 }}\nwhile (n < 5) {\n  n = n + 1\n}\n{{ n = 6 }} // wrong!\n```\n\nThere's lots of theory about properties of Hoare triples, but they get really interesting when combined with Separation Logic.\n\n# What is Separation Logic?\n\nWhen we're writing a real computer program, all the values we pass around \"live\" somewhere in the machine, either in a cpu register or in memory. When we pass values to other parts of the program, we need to somehow tell that other part where our values are, either by copying them to some different place or just giving a reference to where the value is stored. 
This means we don't really \"give\" the value, we just put it somewhere we know the other part will be looking for it.\n\nIt also means the values could be \"destroyed\" by writing different values into their spot. This is a very important quality of a program that has to be respected in order to get the program right, since carelessly writing values to different spots in the machine could destroy values other parts of the program are relying on.\n\nFor a long time formal verification theories didn't do a good job of acknowledging this, which is why they typically only worked with pure functional systems where no value could be mutated. Doing so made it easy to pretend the real computational values were actually purely imaginary logical values, but this made it impractical to prove things about real high-performance programs.\n\nSeparation logic was invented as a solution to this problem. It's a system that makes it possible to encode \"destructibility\" or \"ownership\" into values, so we can finally reason about real locations in a machine.\n\nLet's go through how it works. If we have some assertion in a normal logic, such as `A & B`, we're allowed to \"duplicate\" parts of our assertion as long as doing so doesn't change its meaning. For example, if `A & B` is true, then so is `A & (A & B)` or `(A & B) & (A & B)`. 
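To make this concrete, here's that duplication carried out in Coq, using ordinary conjunction (this is plain Coq, nothing made up):\n\n```v\nTheorem duplicate: forall A B: Prop, A /\\ B -> (A /\\ B) /\\ (A /\\ B).\nProof.\n  intros A B H.\n  (* the same evidence H satisfies both halves of the new conjunction *)\n  split; exact H.\nQed.\n```\n\nNothing stops us from using the hypothesis `H` twice. 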
However you can probably guess that this kind of duplication isn't actually consistent if the assertions are talking about values that live in some spot of the machine.\n\nSeparation logic introduces a concept called the \"separating conjunction\", which basically claims *ownership* of some assertion, and requires us to \"give up\" an assertion if we want to share it with someone else.\n\nSo we can work through some examples, let's make up a notation to encode our assertions about memory locations: we'll decide that something like `[10] => 50` says that memory address `10` holds the value `50`.\n\nThe all-important separating conjunction is almost always written with the `*` symbol, and is usually read aloud as \"and separately\". So `[1] => 1 * [2] => 2` would be read as \"memory address `1` holds `1` *and separately* memory address `2` holds `2`\".\n\nHere's the really important part: the separating conjunction is *defined* so that it isn't allowed to combine multiple assertions about a single location under any circumstances. For example `[1] => 1 * [1] => 1` isn't allowed, even though the two assertions say the same thing!\n\nThis means that if we want to call a function and share knowledge of some memory location we've been writing to, we have to *give up* our assertion about that memory location while the function is working with it, and we can't just make a copy for ourselves. 
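For example, imagine a hypothetical `free` operation that deallocates a memory address (sketched in our made-up notation from above, not any particular tool's syntax). Its specification *consumes* the caller's assertion about that address:\n\n```\n{{ [addr] => v }}\nfree(addr)\n{{ emp }}\n```\n\nHere `emp` is the standard separation logic assertion claiming ownership of nothing at all. After calling `free` we've permanently handed over our `[addr] => v` assertion, so the logic simply won't let us prove anything that reads or writes `addr` again. 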
This is the killer feature of separation logic: it encodes the idea of destructible resources with some very simple rules.\n\n---\n\nacademic to engineer translation dictionary\n\n---\n\nBy the time we're done, you'll understand these ideas:\n\n- Dependent types, which allow types to reference values and so represent any logical claim about any value\n- Curry-Howard equivalence, which discovered that a computer program can literally represent a logical proof\n- Type theory, the system that can act as a foundation for all logic and math, and is the thing that inspired programming in the first place\n- Calculus of constructions, the particular kind of type theory used by Coq\n- Proof objects, the essential realization that proofs are just data, because logical claims are just types\n- Separation logic, the variant of logic that deals with finite resources instead of duplicable assertions\n\n- Basic type system ideas, they can skip if they know Rust\n- Boolean logic, they can skip if they know about truth tables and De Morgan's laws\n- Type theory by way of set theory, talking about the `in` operator and quantification and implication\n\n\nin my view, there are two major areas of learning you have to do to use Coq\n\n- The big picture theoretical ideas such as Type Theory, the Calculus of Constructions, the Curry-Howard Equivalence, and induction. Some of these are genuinely mind-bending and difficult to wrap your head around!\n- The actual *skill* of proving things, which is equal parts art and science.\n\nMost guides I've encountered discuss both in an intermingled style, and seem to order their examples based on how big or complicated they are versus how difficult they are to understand. 
Basically all of them encourage you to open the guide in an editor environment capable of stepping through the proof tactics so that you can see how the proofs work as you read along.\n\nIn my experience trying to learn Coq, I routinely got hung up on the *concepts*, and found it difficult to really understand the proofs even if I could step through them. Similarly, as I've tried to actually use Coq, I've found it annoying to have to dig through a whole textbook just to find some tactic explanation or syntax example.\n\nFor these reasons this guide takes a different direction.\n\n- The first part isn't really intended to be stepped through in an editor. It uses lots of examples, but it focuses on explaining the big ideas in a distilled and focused way. This part of the guide you can just read through and ponder. You'll likely have to sleep on many of the concepts to really get them.\n- The second part is more of a workshop in practical proving. It doesn't have any big ideas to share, but just talks about different proof strategies, goes through detailed examples, and goes over how many common tactics actually work. This is absolutely intended to be hands-on and done in an editor.\n- The third part is just a \"by example\" style reference, intended to be a resource you use while you're working on real projects. Coq is a *huge* language with tons of tactics and concepts, so the reference doesn't attempt to be truly complete, but we can work toward that goal together.\n\nThis guide is for you if you're attracted to the idea of writing provably correct software. 
Even if you don't have any particular projects in mind that you think could be verified, learning these ideas will massively level up your understanding of logic and computer science, and will completely change how you think about software.\n\n> If a language doesn't change the way you think about programming it isn't worth learning\n>\n> — somebody\n\n\n# basic type system concepts (which are secretly type theory ideas)\n\n# pure functional languages\n\nCoq is a pure functional language\nsome basic examples of functions\nnote how we're defining types with the `Inductive` keyword. don't worry about why `Inductive` makes sense for now, we'll talk about that a lot later. for reasons we'll talk about later we won't define any of\nwe only have to know how a type will be represented as bits if we actually *run* code that uses that type. if the code is merely *type checked* then the types can remain truly imaginary\n\n\nHaskell isn't *truly* pure and functional, it has little holes intentionally built into the language so that it can actually be used to run real computer programs. how it does that isn't relevant for us right now, so I won't go into it.\nbut Coq *really is* absolutely pure, which is why *on its own* it can't really be used for real computation. the language itself has no way of interacting with an operating system, or printing to consoles, or a way to be defined in terms of bits and machine instructions. the real purpose of the language is only really to be type checked and not run\nbut we can do these two things:\n\n- use a language interpreter to \"run\" the language. this is extremely slow and of course can't always happen if we're doing things with possibly infinite data or whatever\n- *extract* the language to a different one like OCaml or Haskell. this is kinda gross in a way, since we're assuming that this process preserves all our correctness stuff and the target language can even do the things we want. 
but for many purposes it's perfectly acceptable.\n\n\nyou may be surprised to find out that Coq only has three truly primitive aspects:\n\n- ability to define inductive types\n- ability to pattern match on inductive types\n- ability to define functions\n\nthe type system only has two basic primitives:\n\n- functions\n- inductive types\n\nThat's it! Everything else, including all operators and even the definition of equality (`=`), is defined within the language.\n\n\n\n\nin logical languages when we define types we're defining something \"out of thin air\". we're defining arbitrary rules that we think will be useful because they line up with something in the real world. that's not the case for types in real programming languages, since those types are just abstractions around different ways of packaging and interpreting digital bits.\n\neven numbers aren't really *real*, and when we count \"things\" we're really just thinking about billions of atoms that happen to be close enough together and stuck to each other enough that we can pick up one part of the \"thing\" and have the rest of the \"thing\" come along for the ride. 
everything is just atoms all the way down, but the concept of numbers is a useful abstraction we can apply to our world when we're thinking about the world in terms of these \"things\", these chunks of stuff that are bound together enough to act as a unit.\n\n# basic boolean logic\n\n# predicate logic?\n\n# The Curry-Howard correspondence, how code can prove things\n\n# set theory, and why it was superseded by type theory\n\n# inductive types and proof objects\n\n# coding with interactive tactics\n\n# the difference between \"logical\" types and \"computational\" types\n\n\nhttps://www.youtube.com/watch?v=56IIrBZy9Rc&ab_channel=BroadInstitute\nhttps://www.youtube.com/watch?v=5e7UdWzITyQ&ab_channel=BroadInstitute\nhttps://www.youtube.com/watch?v=OlkYNDRo2YE&ab_channel=BroadInstitute\n\nhttps://x80.org/collacoq/\n\n\nhttps://github.com/jscert/jscert\n\n---\n\n\nIs propositional logic required to reason about consistency? What underpins propositional logic?\n\nI've been reading around about consistency and various paradoxes discovered in different logical systems (such as [Russell's paradox](https://plato.stanford.edu/entries/russell-paradox/)), and I'm trying to figure out what it really *means* for something to be inconsistent. All the resources I can find on the topic just appeal back to basic propositional calculus by saying something about it being possible to derive some variant of `false` in the logic, but for some reason that's unsatisfying to me. If we need some ideas of true/false/contradiction in order to even do *metatheory*, then does that make the propositional calculus some kind of axiomatic bedrock for all of logic? Where's the \"bottom\" of our logical systems? Do we have one? 
Do different metatheorists simply disagree or use different systems?\n\nTo me it seems a true paradox like Russell's is worrying not because it's possible to derive a contradiction (is it possible to?), but because there are some situations *where we simply don't know what to do*. It seems these are merely problems of underspecification rather than inconsistency.\n\nFor example here's a dumb little logic that introduces an inconsistency, but only because we're specifying it in such informal language:\n\n- There are constants `green`, `blue`.\n- There is an operation `yo` which can take two of the above constants and output another one of the above constants.\n  - `yo(x, x)` outputs `x` (`yo(green, green) -> green`, `yo(blue, blue) -> blue`).\n  - If `green` is input to `yo`, the result must be `green`.\n  - If `blue` is the second argument to `yo`, the result must be `blue`.\n\nTo me it doesn't make sense to say this logic is inconsistent, it just seems \"poorly typed\". If we were forced to actually encode our `yo` operation as a fully explicit list of input pairs along with outputs, there would be duplicates or gaps in the list depending on how we chose to ignore the problems in the logic. In Coq we can't even get such a logic to type-check if we don't arbitrarily resolve the inconsistency:\n\n```v\nInductive constant: Type := green | blue.\nDefinition yo c1 c2 :=\n  match c1, c2 with\n  (* these rules make sense *)\n  | green, green => green\n  | blue, blue => blue\n  | blue, green => green\n  (* here we have to just resolve the problems by choosing which rule \"wins\" *)\n  | green, blue => blue\n  end.\n\nNotation valid_yo f := (\n  (forall t, f t t = t)\n  /\\ (forall t, f green t = green)\n  /\\ (forall t, f t green = green)\n  /\\ (forall t, f t blue = blue)\n).\n\nTheorem valid_yo_impossible:\n  forall yo, valid_yo yo -> False.\nProof.\n  intros ? 
(_ & green_left & _ & blue_right).\n  specialize (green_left blue).\n  specialize (blue_right green).\n  rewrite green_left in blue_right.\n  discriminate.\nQed.\n```\n\nCould this potentially offer us an intuitive explanation of why strongly normalizing logics such as the calculus of constructions are consistent? Since their very structure demands operations to be fully explicit and complete and type-checkable, it makes it impossible to even represent truly \"inconsistent\" terms? Is deterministic reduction the\n"
  },
  {
    "path": "posts/crossing-no-mans-land.md",
    "content": "# Crossing No Man's Land: figuring out Magmide's path to success\n\nThese are just the important points I want to make. Should I extend this to a full blog post? Is this too terse?\n\n- The real problem with existing proof languages like Coq *isn't* that they aren't powerful enough to be useful. Because of their core ideas of dependent types, proof objects, and resulting theoretical tools like separation logic, they're already extremely powerful and could be used *today* to build practical things.\n- However they're so poorly designed and explained and exposed to practitioners that they might as well not exist. The real problem is the culture and incentives of academia.\n- This is frustrating, since these projects have *technically* occupied large areas of use-case space, thereby making it much more difficult for a project like Magmide to quickly deliver incremental concrete value. Any small milestone Magmide could reach would only provide functionality technically already provided by other projects. In order to provide concrete verification value, we'd have to get detoured working on things that don't bring the language closer to a useful threshold of functionality.\n- Most concrete verification problems, such as those in smart contracts or cryptography or even memory safe programming, mostly need *reusable theory libraries* defining correctness/safety conditions and algorithms to check for them, as well as a tool capable of applying those conditions to real implementation code. The Magmide project has nothing to do with domain specific theory libraries, but instead seeks to create a completely general tool that makes it uniquely possible for such libraries to be created, shared, and applied. 
It seeks to create a foundational ecosystem, giving others the tools and education to solve their own problems by leveraging [the verification pyramid](https://github.com/magmide/magmide#do-you-really-think-all-engineers-are-going-to-write-proofs-for-all-their-code).\n- This is again frustrating, since those kinds of specific use cases are the only ones not already occupied by other projects. This makes me unsure what incremental project milestones Magmide could use to propel itself toward completion.\n- I'm becoming more and more convinced the correct short term path is to first pursue the \"Coq for programmers\" project, an online book that clearly explains the core ideas of Coq, guides users through the rough edges of installing and working with the tool, gives a practical crash course in theorem proving, explains methods to parse and prove assertions about the contents of external files, and develops a handful of small but interesting case studies such as formalizing a simple smart contract language and verifying programs in it.\n- The \"Coq for programmers\" project would test this hypothesis: if we create documentation/libraries/examples for existing proof languages, such that particularly correctness-conscious teams can use them for small but important applications, that will generate *just enough* interest for a broader audience to see the massive usability flaws as obvious and dire and worth solving. Then hopefully the energy and resources necessary to implement a design like Magmide's (or some other possibly better design!) will inevitably show up.\n- An online book project can iterate very quickly, since it can be shared for feedback at almost every stage of completion. We can learn a ton about how programmers think about correctness, as well as how difficult it is to teach and understand the core ideas and heuristic skills of a proof assistant. 
By finding sharp edges and bad explanations and workarounds for them, we gain a much more solid map of all the problems we need to solve in Magmide.\n- The \"Coq for programmers\" project would live inside the [Magmide GitHub organization](https://github.com/magmide), and would routinely point to Magmide whenever it was explaining something it had to admit was obtuse or difficult. It would be a [fast static site](https://nuxtjs.org/announcements/going-full-static/), with niceties such as [quick search functionality](https://docsearch.algolia.com/) like the one on [tailwindcss.com](https://tailwindcss.com/).\n- The essential components of the \"Coq for programmers\" project:\n  - Primer on prerequisite ideas: basic algebraic types, pure functional languages, boolean/predicate logic.\n  - Core ideas section discussing: dependent types, type theory, proof objects/Curry-Howard correspondence, indexed types, typeclasses as proof objects.\n  - Course in interactive theorem proving discussing: rewriting, case analysis, induction, absurdity, automation, reflection, coinduction, setoid/morphism rewriting.\n  - Hoare logic and separation logic.\n- Optional \"nice to have\" components, depending on scope:\n  - Fast searchable tactic and tactic notation reference, heavily skewed toward examples rather than formal explanations like in the [Coq reference](https://coq.inria.fr/refman/coq-tacindex.html).\n  - Explanation of Iris.\n  - Explanation of core ideas of category theory.\n\n\n<!--\ncute opening with koan and hypothetical about Rust having bad ergonomics\n\n\nAll of the things I could think of are either some sort of Coq parser/integration enabling proofs of qualities of source code, or just verified compilers. Both those things are somewhat useful but are ultimately detours.\n\n\n\n\nAny project has to figure out how it's going to be successful, how it's going to power all the work that needs to happen to achieve its goals. 
For small projects that can be pushed to a useful point in \"20% time\" or by one person in their free time, this question is easy to answer: keep working until you have something to show.\n\nFor projects as massively ambitious as Magmide however, this question is extremely difficult to answer. I'm not capable of pushing this project to a useful point on my own in my free time, and in order for it to be successful I'll need help from a ton of very knowledgeable and hardworking people. I've been talking with Juan Benet about this project, and he pointed me toward the [Tesla master plan](https://www.tesla.com/blog/secret-tesla-motors-master-plan-just-between-you-and-me) to use as inspiration. The Tesla master plan plotted several milestones for the car company to reach, with each intended to both be tractable given the resources available at that stage and bring in enough excitement and money to help the company reach the next milestone. [No matter what you may think about Tesla's overall impact on the world](TODO), the plan certainly seems to have worked. It's much easier to work toward small incremental goals that structurally support the pursuit of even larger goals than it is to jump toward a huge goal all at once.\n\nSo I've been thinking about what kinds of concretely useful milestones Magmide could reach that would create excitement and bring more contributors and support to the project, and I must admit I'm discouraged by what I've been finding. I'll summarize my thoughts, and then do more to support them.\n\nEssentially: all the aspects of Magmide are there to place the design at a uniquely [\"curve-bending\"](https://www.youtube.com/watch?v=2ajos-0OWts) point in the design space. The goal is to create a fully powered proof assistant that is pleasant to work with and easy to apply to concrete computational problems. 
I assert that combination of features will be past a tipping-point of power, one that unleashes not just differences of *scale* in software quality and ambition, but differences in *kind*, all as a result of the force multiplying nature of a highly reusable and sharable verification pyramid. In other words, I think Magmide's design will uniquely enable software projects that are essentially impossible without the presence of *all* Magmide's essential features (maxed out in logical capability, maxed out in computational capability, maxed out in metaprogrammatic capability). If you reduce the scope of even one of the essential features of the project, you've just recreated what's already available in other projects and not really achieved anything. In order to achieve *any* of the goal, you unfortunately have to achieve basically *all* of it.\n\n- If you don't have a fully powered dependent type proof checker, you lose massive swaths of functionality since you can no longer represent many different kinds of interesting and useful logical assertions, dramatically limiting the ability of the language to give valuable guarantees of correctness to many teams.\n- If you don't have the ability to write and compile bare metal imperative programs in a way that's fully integrated with your proof assistant, your proof checker is stuck on an island of computationally useless type theoretical purity, dramatically limiting the practicality of the language.\n- If you don't have the ability to write metaprograms in your integrated bare metal imperative language, the language can only support usage patterns that are explicitly supported by the compiler, dramatically limiting the expressivity and reusability of the language.\n\nWe can look at the space of languages plotted along three axes on a scale of 1-10: computational power (ability to express arbitrarily bare metal computation), logical power (ability to express arbitrarily complex logical assertions in the type system), and 
ergonomic usability (general \"lovedness\" of the design, tooling, documentation, teaching). These are my gut feeling ratings, and aren't at all objective.\n\n|       | Computation | Logic | Usability |\n|-------|-------------|-------|-----------|\n| Rust  | 10          | 5     | 10        |\n| C/C++ | 10          | 3     | 6         |\n| Coq   | 3           | 10    | 4         |\n| Lean  | 5           | 9     | 2         |\n| F*    | 8           | 9     | 3         |\n\nHere's a (sloppy and not at all accurate) graph:\n\n![3D plot of language axes](posts/crossing-no-mans-land-1.png)\n\nRust is of course not \"perfectly\" usable, the 10 score is given to reflect the project's core cultural commitment to usability, and the fact that *given the inherent complexity of the language* they've done a superb job building a usable tool.\n\nCoq is basically the C of logic, in that it's technically quite well supported, is the most powerful it could possibly be, has a (relatively) large ecosystem, and books that are acceptable but still not welcoming or distilled. But it has punishing tooling and cluttered syntax and often confusing semantics. It's still more usable than Lean or F*! A lot of work has gone into the interactive proof goal technology.\n\nThis is why Magmide is so insanely ambitious. It's essentially trying to (converge towards) scores of 10 on all three dimensions. 
Usability can converge slowly as long as Computation and Logic are both maxed out, but if both Computation and Logic aren't maxed out then the project isn't usefully distinct from others.\n\nLooking at the first few bullets of the [project and bootstrapping plan](https://github.com/magmide/magmide/blob/main/posts/design-of-magmide.md#project-plan), it's clear to see that Magmide won't be distinctly useful\n\nIn order to achieve *any* potential that isn't achievable using other tools, Magmide has to first redo the core work already done by those tools and then use that foundation to surpass them.\n\nSince the only unpicked spots in the value space are ones having to do with concrete verification problems that are actually usable for the engineers in those spaces, they are almost all ultimately detours on the path of actually implementing Magmide. The work of finding concretely useful projects that can excite and engage a larger verification audience is intrinsically a distraction from actually solving the real long-term problems of verification :(\n\nOne of the big values the project would add is just a new culture of verification that has escaped the suffocation of academia. Educational materials and nice tools are a big part of the usefulness.\nThat's what we really have to do here. The problem is academic culture, and right now verification is trapped inside academic culture. We've already seen that engineering cultures are actually pretty good at (converging towards) approachability, practicality, usability, pragmatism (maybe I'm just thinking about Rust, but ultimately Rust both proves it's possible and has actually been changing engineering culture as a whole, it seems). 
Maybe that's the most important thing we can do, just help verification escape from the suffocation of academic culture and begin to grow in the richer earth of engineering culture.\nthe norms and teaching patterns and documentation expectations are awful in academia, which means that even though these incredibly powerful tools have been created, no one who would actually apply them to do concretely useful work has any clue they even exist, let alone how to tractably use them\n\nThis is one of the most frustrating aspects of research debt! It seems very frequently that academics merrily walk along some research path, pluck all the low-hanging fruit of a domain, and then publish their work without even a second thought about how to expose that work to people with the means and intention to actually apply it in the world. This means that anyone coming along after them who wants to apply their work isn't seen as doing anything very valuable or interesting, but is just seen as scraping up the dregs left by the previous researchers. This makes it much more difficult for things to be applied at all!\n\nImagine a world where Rust existed with the core powerful idea of ownership and lifetimes, but it was really clunky to use and all the tutorials and documentation and books were academic and disconnected. In that world anyone who wanted to build a better tool would have a much more difficult time, because anyone who wanted to use those core ideas technically already could\n\n\nverifying smart contract languages is the thing that seems most likely to generate energy and resources, but it's an especially distant detour! the whole reason for formalizing LLVM as the intermediate steps of Magmide is that LLVM can be used as both the target and implementation language for the Magmide compiler. 
Doing something equivalent to that task is unavoidable on the path of bootstrapping the compiler.\n\nMagmide isn't really intended to be the *final* product, the thing that people use to implement all the verified blockchains and compilers and languages in. It's intended to be the foundation for all those things, something that maximizes reusability so it can be a universal force multiplier for all software projects.\n\nso my hypothesis is that if we just create better documentation/libraries/examples for *existing* dependent type languages, then that will generate *just enough* interest for a much larger body of people to see their massive usability flaws as obvious and dire and worth solving, such that the energy and resources necessary to implement a design like Magmide's (or some other possibly better design!) inevitably show up.\n\nIt's *already technically possible* to use something like Coq to process and prove assertions about arbitrary code. It's just extremely painful and clunky! And much more importantly, very few who might actually be interested in doing so *are even aware it's possible to*. What would happen if we both told them it was possible and taught them how to do it?\n\nThis brings me to yet another \"comparison with X\", except this comparison is with existing teaching materials\n\n\nThe nice thing about first doing a project like Coq for programmers is that it can get extremely fast feedback, since it doesn't have to be \"done\" to be shared or useful. We can iterate very quickly and find out a lot about how existing engineers think about correctness, as well as how difficult it is to teach and understand the core ideas and heuristic skills of a proof assistant. 
By finding all the sharp edges and bad explanations in the existing system and finding workarounds for them, we will gain a much more solid map of all the problems we need to solve in Magmide.\n\n\nCoq for programmers, basically founding a bastion of people who want decent explanations and useful tools\n\nA binding tool that reuses the Coq proof checker to make a better interactive system, isn't linear, has nice asserted types, could be used to write verifiers for other languages\n\na programming language becomes more useful or gains power as a function of its core features/primitives (representing what it's theoretically capable of doing), its tooling (representing how easy it is to actually use/access those core features), its ecosystem (representing how much reusable work has already been done), and its documentation and educational materials (representing how easy it is to learn how to do all of this). the foundation and most important aspect of a language is its core features, since if the primitives of the language can't possibly support something then none of the other aspects (tooling, ecosystem, documentation) can make up for that fact. only a fundamental change to the language itself can possibly bring about that support\n\nthis means that the three orbital features are mutually self-reinforcing and create a feedback loop. improving any of them will tend to make it more tractable and attractive to improve the others.\n\nfor a language like Coq the core primitives that make it so powerful are dependent types and proof objects, and using those primitives one can implement a separation logic which is similarly powerful. it's quite difficult to create a language with dependent types and proof objects, since the type/proof checking algorithm is very subtle and logically complex, and unfortunately the threshold is fairly \"all or nothing\". 
either you have a proof checker capable of correctly supporting those features or you don't.\n\na zen koan: if a language has powerful features but no one can understand how to use them, have they really been implemented?\n -->\n"
  },
  {
    "path": "posts/design-of-magmide.md",
    "content": "# Design of Magmide\n\nTo achieve the goals of the Magmide project, we have to arrive at a system with these essential components:\n\n- The Logic language, a dependently typed lambda calculus of constructions. This is where \"imaginary\" types are defined and proofs are conducted.\n- The Host language, an imperative language that actually runs on real machines.\n\nIf we have such a system, then *both* components (Logic and Host) can *formally reason about each other and themselves*, and can *run with bare-metal performance*.\n\n```\n         represents and\n           implements\n  +------------+------------+\n  |            |            |\n  |            |            |\n  v            |            |\nLogic          +---------> Host\n  |                         ^\n  |                         |\n  |                         |\n  +-------------------------+\n        logically defines\n          and verifies\n```\n\nThese two components have a symbiotic relationship with one another: Logic is used to define and make assertions about Host, and Host computationally represents and implements both Logic and Host itself.\n\nThe easiest way to understand this is to think of Logic as the type system of Host. Logic is \"imaginary\" and only exists at compile time, and constrains/defines the behavior of Host. Logic just happens to itself be a dependently typed functional programming language!\n\nThis architecture makes it possible to max out all the important aspects of the language:\n\n- **Max out logical power** by making the full power of dependent type theory available to all components at all stages. Without this the design wouldn't be able to handle lots of interesting/useful/necessary problems, and couldn't be adopted by many teams. It wouldn't be able to act as a true *foundation* for verified computing.\n- **Max out computational power** by self-hosting in a bare metal language. 
If the language were interpreted or garbage collected then it would always perform worse than is strictly possible.\n- **Max out expressive power** by allowing deep metaprogramming capability. Metaprogramming is basically a cheat code for language design, since it gives a language access to an infinite range of possible features without having to explicitly support them. It's the single best primitive to add in terms of implementation overhead versus expressiveness.\n\nFor a long time the goal of this project was to build a *new* Host language, something analogous to [LLVM](https://en.wikipedia.org/wiki/LLVM), so that all the formal reasoning could be made *foundational* (reaching all the way down to hardware), and so that extreme portability could be achieved. Those aims are absolutely still a part of the long-term vision of the project, but after discussions with [Juan Benet](https://www.linkedin.com/in/jbenetcs) and [Tej Chajed](https://www.chajed.io/) it was realized the project would be more realistic if it first sought to serve the needs of an existing language community. Rust is the obvious choice!\n\nWith this realization the new project roadmap has these essential milestones:\n\n### Build a proof assistant in Rust.\n\nA proof assistant is just a programming language with a type system powerful enough to represent pure logic, perhaps with some convenience features added on top to make it easier to practically use. This is what \"Magmide\" will actually be, a proof assistant written in Rust, designed from the beginning to be high-performing, highly modular and reusable from other projects, with support for and emphasis of Rust as the language of proof tactics and metaprogramming.\n\nSo Rust will be to Magmide what [OCaml is to Coq](https://github.com/coq/coq). 
This means that Magmide will be the \"Logic\" language in the above diagram.\n\n### Formalize Rust inside Magmide.\n\nA proof assistant is powerful enough to formally specify all the rules of any programming language less logically powerful than it, so just as [researchers have defined semantics for many languages in Coq](https://softwarefoundations.cis.upenn.edu/plf-current/Preface.html), so too could we define the semantics of Rust in Magmide.\n\nDoing this will *mostly* involve following the lead of the various [RustBelt projects](https://plv.mpi-sws.org/rustbelt/) (and so would need to also translate Iris into Magmide). However, it will be somewhat more difficult in our case, since we'll need to define the semantics more precisely in order to achieve the next milestone.\n\n### Allow Magmide to formally certify Rust!\n\nIf Magmide is implemented in Rust, and the formal semantics of Rust are transcribed in Magmide, then it's possible to ingest *real* Rust code and formally reason about it in Magmide!\n\n[Reflective proofs](http://adam.chlipala.net/cpdt/html/Reflection.html) are ones in which a *computable function* is used to certify some input data has some properties. To do this you need to be able to prove that certain properties of the *certifying function* demonstrate properties of the *input data*. This technique allows the proof assistant to merely run a function at proof-checking time rather than checking a possibly massive proof object.\n\nThis kind of recursive self-analysis will be extremely powerful, especially to [hopefully dramatically improve the performance of many proof search/checking scenarios](https://gmalecha.github.io/reflections/2017/speeding-up-proofs-with-computational-reflection) (more discussion about performance below).\n\nTo achieve recursive self-analysis we will:\n\n- Implement the \"Host reflection\" rule in Magmide. 
This means writing a special rule into the proof checker that accepts a proposition as proven if: given an *AST* of a Rust function and a normal Magmide proof that this AST is a \"certifier\", it compiles and successfully runs over whatever inputs are conditions of the proposition in question.\n- Implement the systems that can actually ingest real Rust ASTs in whatever way is necessary for the Host reflection rule.\n- Design and implement whatever syntactic affordances make it clean and ergonomic to make proof assertions about real Rust. *Handwaving goes here!* This could happen in a multitude of different ways, and could be incrementally improved over time.\n- Figure out the [Trackable Effects](https://github.com/magmide/magmide#gradually-verifiable) system. This seems important to really make Rust fully formalizable, since effects are such a critical part of real system correctness.\n\n---\n\nAfter the above milestones are achieved, we will have officially reached the base goals of the Magmide project! From there we'd be able to incrementally improve performance and usability, and also take on further challenges:\n\n- Circle back to extending the formal foundations all the way to the hardware! This would involve verifying LLVM or building some new LLVM analogue, as well as the specific [architecture backends](https://en.wikipedia.org/wiki/LLVM#Backends).\n- Build a formally verified Rust compiler! If we can formally verify Rust code, then we can incrementally verify the compiler itself.\n- Formally verify the Magmide proof checker using a trusted theory base, much as was done in the [metacoq project](https://metacoq.github.io/).\n\n# Interlude on proof assistant performance\n\nOne of the main reasons proof assistants and formal verification in general aren't mainstream is that existing proof assistants are *slow*. It isn't uncommon for large academic formalizations to take *many hours* to complete proof checking. 
This obviously can't scale to industrial codebases.\n\nThis performance problem is largely a consequence of lack of priority. Almost all proof assistants were designed and built by academics for their own purposes, and industrial use is largely considered a bonus. Good work is certainly done to improve performance, but few projects concern themselves with that from the beginning. There is some low-hanging fruit to be had here, such as using [incremental compilation](https://blog.rust-lang.org/2016/09/08/incremental.html). Overall though, mere compiler design probably isn't the biggest problem.\n\nIt seems pretty obvious that the *most* concerning problem is the poor performance of proof searching tactics. In languages such as Coq, proof tactics perform poorly for a few reasons:\n\n- The [Coq Ltac language is a separate interpreted language](https://coq.inria.fr/refman/proof-engine/ltac2.html#ltac2). Interpreted languages are slow! And such an ad-hoc language embedded into a larger system will never get the amount of performance attention it needs. Tactics don't actually have to be \"correct\" or \"sound\", they're just computations that *attempt* to find a proof term that seems like it will correctly proof-check to solve some goal. This means they can be written in any language we want.\n- Tactics often function by *searching* using various heuristics to find proof terms, and these searches by their very nature can sometimes take a long time!\n\nMagmide intends to attack these performance problems in these ways:\n\n- Using and emphasizing a high-performance language for proof tactics. Rust is an amazing language, and it seems like a perfect choice, especially since using Rust allows [binding to basically any other language](https://www.hobofan.com/rust-interop/). We could allow people to bring whatever tools they want to bear on proof search!\n- Proof *search* is often slower than proof *checking*. 
Once proof search has successfully found a correct proof term, it is wasteful and slow to run that search again instead of merely caching the proof term. Intelligently implemented incremental compilation can absolutely prevent many of these wasteful repeat proof searches.\n- Emphasizing [reflective proofs](http://adam.chlipala.net/cpdt/html/Reflection.html) for whatever situations allow it, as discussed above. I have a hunch many of the most common industrial use cases will be amenable to reflective proof. For example, just imagine if the Rust borrow checker was sufficiently verified to be a certifier!\n\nTaken all together, these improvements make a convincing case for Magmide as a foundation for a fully verified software ecosystem, and the first proof assistant to go mainstream among industrial engineers.\n\n# Other notable design choices\n\n## Corruption Panics\n\nMagmide will formalize some kind of `panic` effect that can be used to mark programs, making it possible for ambitious projects to prove that they *cannot* panic. However, realistic low-level software must contend with the possibility of *hardware* failure that has created data corruption. It should be possible to write code that asserts the maintenance of invariants despite possible hardware failure.\n\nExcitingly, the need to check possible hardware failure doesn't have to mean we must tolerate ubiquitous `panic` trackable effects on all our code. If we introduce a separate idea of a *corruption* panic, an effect requiring a proof that, *assuming consistency of the hardware axioms*, the `panic` is impossible, we can write highly defensive software without giving up proof of panic freedom under normal hardware operation.\n\n## Assumption Panics\n\nSimilarly to corruption panics, it should be possible to prove some panics will only occur if some *logical* assumption isn't true. 
This is different than corruption panics since those deal with *hardware* assumptions.\n\nThis is a reasonable thing to include because programs will sometimes want to take advantage of some conjectured theorem and aren't capable or don't have the resources to prove it true. If a program author is willing to risk the possibility of a panic if some conjecture isn't true then they should be able to do so, and have those panics signaled differently than other panics.\n\n## No `Set` type\n\n`Set` is just `Type{0}`, so I personally don't see a reason to bother with `Set`. It makes learning more complex, and in the situations where someone might demand their objects to live at the lowest universe level (I can't come up with any convincing places where this is truly necessary, please reach out if you can think of one), they can simply use some syntax equivalent of `Type{0}`.\n\n## Proof-irrelevant `Prop` type?\n\nI haven't had time to thoroughly read [these](https://tel.archives-ouvertes.fr/tel-03236271/document) [papers](https://dl.acm.org/doi/pdf/10.1145/3290316) about proof-irrelevant proposition universes and how their design is related to homotopy type theory. However from my early reading it seems as if `Prop` could simply be made proof-irrelevant along with some changes to the rules about pattern matching from `Prop` to `Type` universes, and the language would be more convenient and cleaner.\n\nPlease reach out if you have knowledge about this topic you'd like to share!\n\n## No distinction between inductive and coinductive types\n\nEvery coinductive type could be written as an inductive type and vice-versa, and the real difference between the two only appears in `fix` and `cofix` functions. 
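For example, as a purely hypothetical sketch (in the same speculative syntax as the other examples in this document), a single definition could serve in both recursive and corecursive code:\n\n```\n// one definition, with no inductive/coinductive split\ntype Sequence (T: Type) =\n  | End\n  | Next (value: T, rest: Sequence(T))\n\n// used inductively: a fix function consumes a finite Sequence\ndef length (s: Sequence(T)): nat; fix;\n  match s;\n    End; 0\n    Next(_, rest); 1 + length(rest)\n\n// used coinductively: a cofix function produces an infinite Sequence\ndef repeat (t: T): Sequence(T); cofix;\n  Next(t, repeat(t))\n```\n\n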
Some types wouldn't actually be useful in one or other of the settings (a truly infinite stream can't possibly be finitely constructed and so would never be useful in a normal recursive function), but occasionally we might appreciate types that can be reasoned about in both ways.\n\nSo Magmide will only have one entrypoint for defining \"inductive\" types, and if a type could be compatible with use in either recursive or corecursive contexts then it can be used in either. It seems we could always infer whether a type is being used inductively or coinductively based on call context. If we can't, we should have a syntax that explicitly indicates corecursive use rather than splitting the type system.\n\nPlease reach out if you have knowledge about this topic you'd like to share!\n\n## Interactive tactics are just metaprogramming\n\nIn Coq the tactics system and `Ltac` are \"baked in\", so writing proofs in a different tactic language requires a plugin.\n\nIn Magmide the default tactic language will just be a metaprogrammatic entrypoint that's picked up automatically by the parser, so any user can create their own.\n\n```\n// `prf` (or whatever syntax we choose) is basically just a \"first-class\" macro\nthm four_is_even: even(4); prf;\n  + add_two; + add_two; + zero\n\n// you could write your own!\nthm four_is_even: even(4); my_prf$\n  ++crushit\n```\n\n## Incremental compilation as widely as possible\n\n[Incremental compilation](https://blog.rust-lang.org/2016/09/08/incremental.html) is a critical technique to ensure most compilation/type checking runs are reasonably fast. 
This is a very common technique in normal programming languages, but it [doesn't seem to have been implemented widely in proof assistants](https://proofassistants.stackexchange.com/questions/335/what-is-the-state-of-recompilation-avoidance-in-proof-assistants).\n\nIn proof assistants that heavily use automated tactics one of the most expensive parts of proof checking is actually running the automated tactics to discover proofs, since those tactics often have to walk a very large search space before they successfully find the right proof terms. Although bare metal tactics/metaprogramming and the computational reflection discussed in the above section will mitigate some of this cost, it still makes sense to avoid rerunning tactics or rechecking proofs if none of their dependencies have changed.\n\n## Builtin syntax for tuple-like and record-like types\n\nIn Coq all types are just inductive types, and those that only have one constructor are essentially equivalent to tuple or record types in other languages. This means that *all* data accesses have to ultimately desugar to `match` statements.\n\nThis cleanliness is fine and ought to remain that way in the kernel, but we don't have to make users deal with this distinction in their own code. 
Although Coq has somewhat supported these patterns with `Record` and primitive projections and other constructs, the implementation is cluttered and confusing.\n\nHere's a possible example of defining and using a few inductive types:\n\n```\n// nothing interesting here, just pointing out it can be done\ntype MyUnit\n\n// unions are roughly the same, again no interesting differences\ntype MyBool =\n  | True\n  | False\n\n// however for record-like types, there should only be as much syntactic difference with other types as is absolutely necessary\ntype Person =\n  name: string\n  age: nat\n// the only syntax allowed to construct a record-like type\nlet some_person = Person { name: \"Alice\", age: 12 }\nprint some_person.name\n// we could still allow explicit partial application with a \"hole\" operator\nlet unknown_youth = Person { age: 12, _ }\nlet known_youth = unknown_youth { name: \"Bob\" }\n\n// tuple-like types are similar\ntype Ip = (byte, byte, byte, byte)\n// only syntax allowed to construct tuple-like types\nlet some_ip = Ip(127, 0, 0, 1)\n// zero indexed field accessors\nprint some_ip.0\n// partial application\nlet unknown_ip = Ip(_, 0, 0, _)\nlet known_ip = unknown_ip(127, 1)\n```\n\n## Anonymous union types\n\nOften we find ourselves having to explicitly define boring \"wrapper\" union types that are only used in one place. It would be nice to have a syntax sugar for an anonymous union type that merely defines tuple-like variants holding the internal types. 
For example:\n\n```\ndef my_weird_function(arg: bool | nat | str): str;\n  match arg;\n    bool(b); if b; \"yes\" \\else; \"no\"\n    nat(n); format_binary(n)\n    str(s); \"string = #{s}\"\n\n// values can be passed without being wrapped or converted?\nmy_weird_function(true)\nmy_weird_function(2)\nmy_weird_function(\"hello\")\n```\n\n## No implicit type coercion\n\nAlthough type coercions can be very convenient, they make code harder to read and understand for those who didn't write it.\n\nSimilarly to how Rust chose to make all type conversions explicit with casts or [the `From`/`To` traits](https://doc.rust-lang.org/std/convert/trait.From.html), Magmide would seek to do the same. This means Magmide will have a trait/typeclass system.\n\nWe can, however, choose to make these conversions less verbose, perhaps choosing a short name such as `to` for the conversion function, or supporting conversions directly with some symbolic syntax (`.>`?).\n\n## Inferred proof holes\n\nThe common case of writing verified functions is to write the `Type` level operations out explicitly (programmers are often quite comfortable with this kind of thinking), and then in a separate interactive proof block after the function body \"fill in the blanks\" for any implied `Prop` operations. In general it's more natural to separate data operations from proof operations, and Magmide will make this mode of operation the well-supported default.\n\nUsers can still choose to specify both `Type` and `Prop` operations explicitly. Or since `prf` is just a macro that constructs a term of some type, interactive tactics can be used to specify an entire term (as is possible in Coq), or *just a portion* of a term.\n\n```\ndef my_function(arg: input_type): output_type;\n  // I know I need to call this function with some known inputs...\n  arg.inner_function(\n    known_input, other_known_input,\n    // ... 
but what should this be again?\n    prf;\n      // some tactics...\n  )\n```\n\nSince often we need to help a type-checking algorithm along at some points, an `assert` keyword can be used to generate a proof obligation making sure some hypothesis type is actually available at some point in a function. This would basically be a `Prop` level type cast that must be justified in the proof block after the function.\n\n```\ndef my_function(arg: input_type): output_type;\n  let value1 = arg.function(known_value)\n  let value2 = arg.other(something)\n  // I know something should be true about these values...\n  assert SomeProp(value1, value2)\n  // ... which makes the rest of my function easier\n  some_function_requiring_SomeProp(value1, value2)\nprf;\n  // tactics proving SomeProp(value1, value2)\n```\n\n## Builtin \"asserted types\"\n\nSubset types are often a more natural way of thinking about data, and packaging assertions about data into the type of the data itself frees us from a few annoyances such as having to declare proof inputs as separate arguments to functions or at different levels of a definition.\n\nAlthough in a dependent type theory a subset type is absolutely a strictly different type than a normal constructed value, we can make life easier by providing syntax to define and quickly pull values in and out of subset types. 
I call these cheap representations of subset types \"asserted types\".\n\n```\n// using & is syntactically cheap\ntype MyByte = nat & < 256\n// multiple assertions\ntype EligibleVoter = Person & .age >= 18 & .alive\n// with parentheses if we want to be clearer\ntype EligibleVoter = Person & (.age >= 18) & .alive\n\n// using a list of predicates and a proof that all of them hold is more flexible than a single nested proposition\ntype AssertedType (T: Type) (assertions: list (T -> Prop)) =\n  forall (t: T), (t, ListForall assertions (|> assertion; assertion(t)))\n```\n\nWe can provide universal conversion implementations to and from types and asserted versions of themselves. Pulling a value out of an asserted type is easy. Putting a value into an asserted type or converting between two seemingly incompatible asserted types would just generate a proof obligation.\n\nThis same syntax makes sense to declare trait requirements on types as well:\n\n```\ndef my_function<T & Orderable>(t: T):\n  ...\n```\n\nAsserted types are simply a broader variant of [liquid types](https://goto.ucsd.edu/~rjhala/liquid/liquid_types.pdf), so it should be possible to infer annotations and invariants in many situations, as is done in [\"Flux: Liquid Types for Rust\"](https://arxiv.org/abs/2207.04034).\n\n## Cargo-like tooling\n\nThere's no reason to not make the tooling awesome!\n\n## Metaprogramming instead of custom Notation\n\nCoq's [Notation system](https://coq.inria.fr/refman/user-extensions/syntax-extensions.html) is extremely convoluted. It essentially allows creating arbitrary custom parsers within Coq. While this may seem like a good thing, it's a bad thing. Reasoning about these custom parsing and scoping rules is extremely difficult, and easy to get wrong. It adds a huge amount of work to maintain the system in Coq, and learn the rules for users.\n\nIt also makes it extremely easy to create custom symbolic notation that makes code much more difficult to learn and understand. 
Allowing custom symbolic notation is a bad design choice, since it blurs the line between the primitive notations defined by the language (which are reasonable to expect as prerequisite knowledge for all users) and custom notations. Although Coq makes it possible to query for notation definitions, this is again just more maintenance burden and complexity that still adds significant reading friction.\n\nMagmide's metaprogramming system won't allow unsignified custom symbolic notation, and will require all metaprogrammatic concepts to be syntactically scoped within known identifiers. Instead of defining an extremely complicated set of macro definition rules, metaprogramming in Magmide will give three very simple \"syntactic entrypoints\", and then just expose as much of the compiler query api as possible to allow for compile-time type introspection or other higher-level capabilities.\n\nMacros can either accept raw strings as input and parse them themselves or accept Magmide parsed token trees. This complete generality means that Magmide can support *any* parsing pattern for embedded languages. Someone could even define something just like Coq's notation system if they really want to, and their custom system would be cleanly cordoned off behind a clear `macro_name$` style signifier. 
By just leaning all the way into the power of metaprogramming, we can allow *any* feature without having to explicitly support it.\n\nTo actually use macros you can do so inline, as a block, or using a \"virtual\" import that processes an entire file.\n\n### Inline macros\n\nInspired by Rust's explicit `!` macros and javascript template literals.\n\nRaw string version:\n\n```\nmacro_name`inline raw string`\n```\n\nSyntax tree version:\n\n```\nmacro_name$(some >magmide (symbols args))\n```\n\n### Block macros\n\nUses explicit indentation to clearly indicate scope without requiring complex parsing rules.\n\nRaw string version uses a \"stripped indentation\" syntax inspired by [Scala multiline strings](https://docs.scala-lang.org/overviews/scala-book/two-notes-about-strings.html#multiline-strings), but using pure indentation instead of manual `|` characters.\n\n```\n// the |` syntax could be generally used to create multiline strings\n// with the base indentation whitespace automatically stripped\nlet some_string = |`\n  my random `string`\n    with what'''\n    ''' ever I want\n\n// placing the literal directly against a path expression\n// will call that expression as a raw string macro\nmacro_name|`\n  some\n    raw string\n  the base indentation\n  will be stripped\n```\n\nToken tree version is like \"custom keywords\", with an \"opening block\" that takes two token trees for header and body, and possible continuation blocks. Here's an example of a \"custom\" if-else block being used.\n\n```\n$my_if some.conditional statement;\n  the.body\n  >> of my thing\n\n/my_else; some_symbol()\n```\n\n### Import macros\n\nAllows entire files to be processed by a macro to fulfill an import command. 
You'll notice the syntax here is exactly the same as inline macros, but the language will detect their usage in an import statement and provide file contents and metadata automatically.\n\n```\nuse function_name from macro_name`./some/file/path.extension`\n```\n\n\n<!-- Now with a proof language, one can define types that model bits, binary arrays, register banks, memory banks, and therefore total machine states. Then one can define various predicates over these types, and model \"computable types\" by defining specific predicates. One can prove partial or total symmetries/modelings between binary arrays fulfilling certain predicates and other normal ideal types. one can define ideal types representing machine instructions, and parser/render functions that are provably inverses, and prove assertions about machine instructions and their effects on a machine state.\nthen you can write programs, and prove things about them.\n\ngoing between ideal and computable values\nif we have metaprogramming, then whenever you define an ideal type, you can access the computational representation of both the *type and any value fulfilling that type*. you can do whatever you want with this information, maybe by converting it into a value representing some other type or fulfilling some other type, possibly in a different layer of abstraction such as a computable one or props or whatever\n\ntypes constrain/define values\nvalues fulfill types\nvalues can computationally represent types\n\nso no type is a fixed point of itself, but a type *system* can be, if it's able to define itself.\n\n\ntype      type\n|          |\nv     |        v\nvalue-+       value\n\n\nlogic types constrain/define and can make assertions about logic values\nlogic values fulfill logic types\nlogic values can\n\nwhat's the difference between a bit array defined Logic Magmide but computationally represented in the smart packed format, and a real bit array? 
there's no difference at all, at least between a particular concrete one.\n -->\n<!--\nHowever there are some subtleties we have to contend with since Magmide is so inherently intended for verification of *real* computational programs.\nThe kernel has to be actually *implemented* in some real computational language, and we'd prefer it was a maximally performant one. Also, metaprogramming of all kinds, whether manipulating Logic terms or anything else, also has to be implemented in a real computational language. These might as well be the same language. This language needs to be one that can be run on *development* machines, the ones that will compile whatever final artifacts. Let's define the term Host to refer to this aspect.\n\nSo the final compiler will be a binary artifact runnable on some collection of host architectures. This artifact will have a few components:\n\nparser, capable of parsing Magmide, metaprogramming constructs, and any other constructs we choose to include in the shipped language syntax, all into Host data structures.\nproof checking kernel, which accepts some Host data structure representing Logic terms.\nmetaprogramming checker/runner. the compiler has builtin definitions and models of Host, so it can take AST structures representing Host and check them (Host likely includes syntax to make assertions about state, which are implicitly predicates over binary arrays), render/assemble them to some place in memory or a file, and jump to them to execute them (possibly having provided arguments by fulfilling whatever calling convention)\n\n\nthe magmide compiler is a program that can operate in any specific machine of a universe of machines that have been modeled at the time of the compiler being compiled. this universe of machines has been modeled with some kind of with input/output capabilities and probably some concepts of operating system services such a filesystem. 
so Host can include functions to write to files, and can expose functions for core concepts such as rendering compilation artifacts (probably accepting custom AST/assertions/checkers/renders etc)\n -->\n\n<!--\n- Magmide syntax rules only allow custom notation through the macro system, which ensures it is always scoped beneath a traceable and searchable name, making it much easier for new users to find explanations or definitions of custom notation.\n- Magmide syntax is whitespace sensitive and designed to make program structure and code formatting directly correspond.\n- Magmide syntax intentionally compresses different ways of expressing the same thing into the most general syntax choices, and requires the use of syntax sugars when they are available.\n- Magmide's import mechanisms usefully expose different kinds of definitions differently, allowing users to not ever need problematic implicit imports.\n- Magmide enables readable markdown documentation comments for definitions.\n- Magmide's builtin formatter warns about inconsistent naming and capitalization.\n- Magmide's core educational materials set a convention of approachability, traceability (calling out prerequisites), and clarity.\n-->\n"
  },
  {
    "path": "posts/intro-verification-logic-in-magmide.md",
    "content": "<!-- examples tell *how*, words explain *why* -->\n\nHello!\n\nIf you're reading this, you must be curious about how it could be possible to write truly *provably correct* programs, ones that you can have the same confidence in as proven theories of mathematics or logic. You likely want to learn how to write verified software yourself, and don't have time to wade through unnecessarily clunky academic jargon or stitch together knowledge scattered in dozens of obscure journal papers.\n\nYou're in the right place! Magmide has been designed from the ground up to be usable and respect your time, to enable you to gain this incredibly powerful and revolutionary skill.\n\nI hope you're excited! I powerfully believe verified software will bring in the next era of computing, one in which we don't have to settle for broken, insecure, or unpredictable software.\n\nHere's the road ahead:\n\nFirst we'll take a glimpse at some Magmide programs, both toys and more useful ones, just to get an idea of what's possible and how the language feels. 
We'll take a surface level tour of Compute Magmide, the procedural portion of the language we'll use to actually write programs.\n\nThen we'll dive into Logic Magmide, the pure and functional part that is used to make and prove logical claims:\n\n- We'll talk about why it's necessary to use a pure and functional language at all (I promise I'm not a clojure fanboy or something).\n- How to code in Logic Magmide, what it feels like to write pure functional algorithms and how it's different than normal programming.\n- A short overview of formal logic, and some comparisons to normal programming.\n- Type Theory, The Calculus of Constructions, and the Curry-Howard Correspondence, the big important ideas that make it possible for a programming language to represent proofs.\n- How to actually make and prove logical claims (it's getting good!), along with some helpful rules of thumb.\n\nNow with a working knowledge of how to use Logic Magmide, we can use it to verify our real Compute Magmide programs!\n\n- Separation Logic, the logical method that helps us reason about ownership, sharing, mutation, and destruction of finite computational resources.\n- Writing proofs about Compute Magmide functions and data structures.\n- Logical modeling, or proving some kind of alignment between a pure functional structure and a real computational one.\n- Testing as a gradual path to proofs, using randomized conjecture-based testing.\n- Trackable effects, the system that allows you to prove your program is free from unexpected side-effects such as memory unsafety, infinite loops, panics, and just about anything else.\n\nAnd then finally all the deeper features that make Magmide truly powerful:\n\n- Metaprogramming in Magmide, the capabilities that allow you to write your own compile-time logic.\n- Some basic programming language and type-system theory.\n- A short overview of basic computer architecture, including assembly language concepts.\n- The lower assembly language layers of Compute 
Magmide, and how to \"drop down\" into them.\n- The abstract machine system, and how Magmide can be used to write and prove programs for any computational environment.\n- A deeper look at Iris and Resource Algebras, the complex higher-order Separation Logic that makes Trackable Effects and lots of other things possible.\n\nThroughout all of these sections, we'll do our best to not only help you understand all these concepts, but introduce you to the way academics talk about them. We'll do so in a no-nonsense way, but we think it's a good idea to make sure you can jump into the original research if you want and not have to relearn all the \"formal\" names for concepts you already understand.\n\nLet's get to it!\n\n## Example Programs and a Tour of Compute Magmide\n\n<!--\n  - hello world\n  - the code and proofs for a verified implementation of something small, like a verified growable list or arbitrary size integer, probably including but hand-waving the purely logical model. this will use asserted types\n  - a Compute Magmide metaprogramming tease, something cool like the surface level use of a sql-like api to operate on raw data structures\n-->\n\n## Logic Magmide, How to Prove Things in a Programming Language\n\n### Why pure and functional?\n\nThere are quite a few pure and functional languages, such as [haskell]() and [clojure]() and [lisp]() and [racket]() and [elm](). What makes them different from ordinary languages? Functional languages enforce two properties, with varying degrees of strictness:\n\n- All data is immutable. There is no ability to mutate data structures, only create new structures based on them. That said, most functional languages have some [cheating escape hatches]() for when it's *really* necessary to mutate something.\n- All functions are pure, meaning that if you pass the exact same inputs into them, you always receive the exact same outputs. 
This means you can't perform \"impure\" actions such as mutating a variable that wasn't passed into the function (remember, you can't mutate *anything*!), or creating side effects such as reaching out to the surrounding system by doing things like file or network operations. Since programs that couldn't interface with the surrounding system *at all* would be completely worthless, functional languages have special runtime functions that allow you to interact with the system by passing them pure functions. But they all return some kind of \"side effect monad\", a concept we don't need to talk about here!\n\nNow, those two properties have some nice consequences. They mean that you can't accidentally change or destroy data some other part of the program was depending on, or get surprised about the complex ways different parts of your program interact with each other, or not realize some function was actually doing expensive network operations at a time you didn't expect.\n\nBut the especially important consequence of purity and immutability is that a program is *simple*, at least from a logical perspective. Every function always outputs predictable results based only on inputs; no complex, difficult-to-reason-about webs of global mutable state are possible; the language operates as if it were simply math or logic, where everything has a precise definition that can be formally reasoned about.\n\nThere's just one big obvious problem: **all that purity and immutability is a lie!**\n\nWhen a computer is running, the *only thing* it's doing is mutating data. Your computer's memory and processor registers are all just mutable variables that are constantly being updated. Purity and immutability are useful abstractions, but they only go one abstraction deep. Without mutation and impurity, computation can be nothing more than merely theoretical.\n\n**However,** this isn't actually a problem if we *are* just talking about something purely theoretical! 
We'll see in the coming sections how proof assistants like Magmide don't need to *run* programs to prove things, they just need to *type check* them, meaning it *doesn't matter* if the programs can't actually be run.\n\nThis means that a pure and functional language is the perfect fit for a proof assistant. All that matters for a proof is that we're able to express theoretical ideas in a way that's clear and precise and can be formally reasoned about. We don't have to care about performance or data representation or any of the details of real computation. Soon we'll even see that type theory, the logical framework powerful enough to form the foundations of all of mathematics and computer science, is itself basically just a pure functional language!\n\nIn a much later section we'll also discover that it *is* actually possible to prove things about imperative mutating code, and even that mutating code can be shown to perfectly correspond with purely theoretical code. This is one of the most important contributions of Magmide, that it integrates purely logical and realistically computable code and allows them to usefully interact.\n\nBut before all that, we have to build up some foundations.\n\n### Coding in Logic Magmide\n\nThe thing that makes Logic Magmide and other proof assistant languages special is *dependent types*, but we can't really understand those yet. First let's just go over the basic features of Logic Magmide, the features it shares with basically every other functional language like haskell.\n\nFirst, we'll define a datatype, a discriminated union (called an `enum` in Rust) that's shaped just like our old friend `boolean`. This type lives in the `Ideal` sort and so is purely theoretical.\n\nTODO\n\nPretty simple. Logic Magmide comes with a default boolean called `bool`, but we'll use our own for a second.\n\nNow let's define a function for `Boolean`. 
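Since the Magmide snippet above is still a TODO, here's roughly what the `Boolean` datatype and one such function could look like as a Rust analogue (hypothetical names and syntax, just to make the shape concrete; this is not Magmide code):

```rust
// hypothetical Rust analogue of the `Boolean` discriminated union above
#[derive(Debug, Clone, Copy, PartialEq)]
enum Boolean {
    True,
    False,
}

// a pure function: the same input always produces the same output,
// and the body is a single `match` expression
fn negate(b: Boolean) -> Boolean {
    match b {
        Boolean::True => Boolean::False,
        Boolean::False => Boolean::True,
    }
}

fn main() {
    assert_eq!(negate(Boolean::True), Boolean::False);
    assert_eq!(negate(Boolean::False), Boolean::True);
    println!("negate behaves as expected");
}
```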
In normal imperative languages the body of a function is a series of *statements*, commands that mutate state as you go through the function. But in pure functional languages we can't mutate anything, so the body of a function is *only one expression*. Let's define the basic `negate`, `and`, `or`, and `xor` functions:\n\nTODO\n\nSome of these use `let` to give some expression a name that's used in subsequent lines, and the last expression is the final return value of the function. While this may seem at first to be a use of mutable state and against the rules, the way the evaluation rules of these languages are defined means each `let` is technically part of the final expression: the named value is simply substituted into the expression that follows it. Don't worry about it too much!\n\nBoth Logic and Compute Magmide have an awesome trait system, and the `if` operator in both uses a trait called `Testable` that relates a type back to `bool`. Let's make our `Boolean` testable by implementing this trait, and then use `if` to redefine the functions:\n\nTODO\n\nWe can also define types in the shape of a record (called `struct` in Rust):\n\nTODO\n\nOr tuples:\n\nTODO\n\nOr [\"unit\"](https://en.wikipedia.org/wiki/Unit_type) for types that can only have one possible value:\n\nTODO\n\nAnd of course the different *variants* (the academic term) of a discriminated union can be shaped like any of those types:\n\nTODO things like option and result and color and ipaddress etc\n\nWe can also create an *empty* type, a type that's impossible to actually construct!\n\nTODO\n\nThis is a discriminated union with *zero* variants, so if we try to choose a \"constructor\" to build this type, we'll never find one: no value of this type can ever exist. 
You may wonder why we'd ever bother to define a type we can't actually construct, but I promise we'll discover a very powerful use for this type later.\n\nBefore we move on it's a good idea to just notice a few ways these different varieties of types relate to each other:\n\n- Tuples and records aren't really that different, since a record is just a tuple with convenient syntax sugar: names we can use to refer to the fields. Any record or tuple type could be refactored into the other shape and the program would do the exact same thing.\n\n  TODO\n\n- The basic unit and record and tuple types are essentially also discriminated unions that just have only one variant! Deep inside Magmide all types are actually represented that way, which is why an \"empty\" type is possible.\n\n  TODO\n\n- The `true` and `false` in `Boolean` are both just the unit type, but they're given distinct names and *defined* as being different from each other by the discriminated union they live in. The same is true for `None` in `Option` and the colors in `Rgb` and `Color`.\n\nNow we get to the thing that makes `Ideal` types special, their ability to simply represent recursive types:\n\nTODO nat\n\nThis type encodes natural numbers (or unsigned integers) in the weird recursive style of the [Peano axioms](https://en.wikipedia.org/wiki/Natural_number#Peano_axioms), where `0` is of course `zero`, `1` is `successor(zero)`, `2` is `successor(successor(zero))`, and so on. Remember, `successor` isn't a *function* that increments a value, it's a *type constructor* that *wraps* child values. Don't worry, you won't have to actually write them that way in practice, since the Magmide compiler will coerce normal numbers into `nat` when it makes sense to.\n\nYou may wonder why we'd represent numbers this way. Wouldn't this be incredibly inefficient? Whatever happened to bytes?\n\nAnd you'd be right! In a real program this way of encoding numbers would be an absolute disaster. 
But the Peano encoding is perfect for proving properties of numbers, since the definition is so simple and precise, and doesn't depend on any other types. Our real programs will never use this idealized representation, but it's extremely useful when we're proving things about bits and arrays and a whole lot more. We'll see exactly how when we finally get to proofs, so for now let's not worry about it and just write some functions for these numbers:\n\nTODO nat operations add, subtract, multiply, remainder divide, is_even\n\nAnother extremely useful recursive type we'll use constantly is the pure functional `List`, which is generic:\n\nTODO\n\nBasically every pure functional language uses the terms [`nil` and `cons`](https://en.wikipedia.org/wiki/Cons) when defining basic lists (`Cons` is short for \"*cons*tructing memory objects\"), so since they're so prevalent we've decided to stick with them here. `Nil` is just a \"nothing\" or empty list, and `Cons` pushes a value onto the head of a child list, in basically the same way as a linked list. This means `[]` would be represented as `Nil`, `[1]` as `Cons{ item=1, rest=Nil }`, `[1, 2]` as `Cons{ item=1, rest=Cons{ item=2, rest=Nil } }`, etc. Again you won't have to write them that way, the normal list syntax basically every language uses (`[1, 2, 3]`) will get coerced when it makes sense.\n\nJust like with `nat`, we'll almost never actually represent lists this way in real programs, but this definition is perfect for proving things about any kind of ordered sequence of items.\n\nHere are a few functions for `List`:\n\nTODO\n\nNow that we're basically familiar with how to code in Logic Magmide, we can start understanding how to use it to prove things!\n\n### A Crash Course in Logic\n\nIf you ever took a discrete mathematics or formal logic class in school, you likely already know everything in this section. 
It isn't very complicated, but let's review quickly to make sure we're on the same page.\n\nA **proposition** is a claim that can be true or false (academics often use the symbols `⊤` and `⊥` for true and false). Some examples are:\n\n- `I am right-handed`\n- `It is nighttime`\n- `I have three cookies`\n\nWe can assign these claims to variables, to make them shorter to refer to (`:=` means \"is defined as\"):\n\n- `P := I am right-handed`\n- `Q := It is nighttime`\n- `R := I have three cookies`\n\nThen we can combine these variables together into bigger propositions using [logical connectives](https://en.wikipedia.org/wiki/Logical_connective):\n\n- The `not` rule reverses or \"negates\" (the academic term) the truth value of a proposition. It's usually written with the symbol `¬`, but we'll use `~`. So if `A := ~P` then `A` is true when `P` isn't true.\n- The `and` rule requires two variables to both be true for the whole \"conjunction\" (the academic term) to be true. It's usually written with the `∧` symbol (think of it as a \"closed\" roof, since `and` is more \"demanding\" and shuts both variables in together), but we'll use `&`. So if `A := P & Q` then `A` is true only when `P` and `Q` are both true.\n- The `or` rule requires only one of two variables to be true for the whole \"disjunction\" (the academic term) to be true. It's usually written with the `∨` symbol (think of it as an \"open\" cup, since `or` is less \"demanding\" and will accept either variable), but we'll use `|`. So if `A := P | Q` then `A` is true when either `P` or `Q` are true.\n\nAll these connectives can have a \"truth table\" written for them, which tells us if the overall expression is true based on the truth of the sub-expressions.\n\nTODO truth tables for not, and, or\n\nThe `implication` rule is especially important in the type theory we'll get into in later chapters. 
It's usually written with the `→` symbol or just `->`, and it's easy to see why it's shaped like an arrow: the truth value of the left variable \"points to\" the truth value of the right variable. So for example, if `A := P -> Q`, then `A` is true if whenever `P` is true `Q` is also true. It's also not an accident that `->` represents both implication and the type of functions (`str -> bool`), but we'll get to that later.\n\nTODO truth table\n\nVery importantly, notice how an implication is only false in one situation, when the left variable is true and the right is false. This means that if the left variable is *false* then the implication is always true, or if the right variable is *true* then the implication is always true. Basically you can think of implications like an assumption and some conclusion that should always follow from that assumption: if the assumption isn't true, then it doesn't matter what the conclusion is!\n\nThe `iff` or \"if and only if\" rule is easier to grasp. Written with `↔` or `<->`, it basically requires the two truth values to be in sync with each other. If `A := P <-> Q`, then `Q` must always have the same truth value as `P`.\n\nTODO truth table\n\nAnd of course, we can combine these connectives in arbitrarily nested structures!\n\nTODO truth table showing some compound structures\n\nNotice how the connectives can be restated in terms of each other? Like how `P <-> Q` is equivalent to `(P -> Q) & (Q -> P)`? Or how `Q` follows from `(P -> Q) & P`? There are lots of these equivalences, which all form basic *tautologies* (formulas that are always true) that can be used when proving things.\n\nTODO list of boolean rules\n\nThis basic form of propositional logic is obviously somewhere at the heart of computing, from binary bits to boolean values. 
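These equivalences are small enough to check exhaustively with ordinary booleans; here's a quick Rust sketch that enumerates every truth assignment, encoding implication as `!a || b` (a hypothetical helper, just to spot-check the tautologies above):

```rust
// material implication: false only when the assumption holds but the conclusion doesn't
fn implies(a: bool, b: bool) -> bool {
    !a || b
}

fn main() {
    for p in [false, true] {
        for q in [false, true] {
            // P <-> Q is equivalent to (P -> Q) & (Q -> P)
            assert_eq!(p == q, implies(p, q) && implies(q, p));
            // Q follows from (P -> Q) & P (modus ponens)
            if implies(p, q) && p {
                assert!(q);
            }
            // an implication with a false assumption is always true
            if !p {
                assert!(implies(p, q));
            }
        }
    }
    println!("all tautologies hold");
}
```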
We're all familiar with operators like `!` and `&&` and `||` in common programming languages like rust and java and javascript, and they just represent these rules as computations on boolean values.\n\nBut simple truth values and connectives aren't really enough to prove anything interesting. We don't just want to compute basic formulas from true or false values, we want to be able to prove facts about *things*, from numbers all the way to complex programs. With the simple rules we've been talking about, we can only stick arbitrary human sentences onto variables and then tell the computer if they're true or not. We need something more powerful!\n\nFirst we need **predicates**, which are just functions that accept inputs and return propositions about them. So if we can write a predicate in this general shape: `predicate_name(input1, input2, ...) := some proposition about inputs`, then these are all predicates:\n\n- `And(proposition1, proposition2) := proposition1 & proposition2`\n- `Equal(data1, data2) := data1 == data2` (`data1` is equal to `data2`)\n- `Even(number) := number is even` (I'm cheating here, since this is still just some arbitrary sentence. Let's ignore it for now 😊)\n\nWe're also going to need these two ideas:\n\n- The \"for all\" rule (or *universal quantification*), saying that \"for all values\" some predicate is true when you input the values. Academics use the `∀` symbol for \"for all\", but I'll just write `forall`:\n  - `forall number, Even(number) -> Even(number + 2)` (forall values `number`, if `number` is even then that implies `number + 2` is also even)\n  - `forall data1 data2 data3, data1 == data2 & data2 == data3 -> data1 == data3` (forall values `data1` `data2` `data3`, if `data1` is equal to `data2` and `data2` is equal to `data3`, then that implies `data1` is equal to `data3`)\n\n- The \"exists\" rule, (or *existential quantification*), saying that \"there exists\" a value where some predicate is true when you input the values. 
Academics use the `∃` symbol for \"there exists\", but I'll just write `exists`:\n  - `exists number, Even(number)` (there exists a `number` such that `number` is even)\n  - `exists data1 data2, data1 == data2` (there exists a `data1` and `data2` such that they are equal to each other)\n\n<!-- TODO need an explanation of why quantification is a reasonable term to use, probably something along the lines that to \"quantify\" something is to give it a name it can be referred to by -->\n\nThe `forall` rule seems especially powerful! It would be extremely useful to prove that something is true about a potentially infinite \"universe\" of values. But how do we actually prove something like that in a programming language? We obviously can't just run all those infinite values through some function and test if it returns true, especially since the whole point of using a purely theoretical programming language was that we don't actually have to run programs in order to prove things.\n\nThe crucial trick is to represent our propositions and predicates as *types* instead of data! Let's see how it works.\n\n### Type Theory, the Calculus of Constructions, and the Curry-Howard Correspondence\n\n<!--\n\nin a real computational language, whenever we define a datatype we're really just defining a different way for bits to be laid out in memory. A `struct` isn't a basic primitive concept, it's just a shorthand for putting a few datatypes one after another in a fixed size chunk of memory. Something similar is true for unions, tuples, lists, etc.\n\nBut in a purely logical *type-theoretical* language, when we define a new type we really *are* defining something out of thin air. A new inductive type creates out of nowhere a set of rules for how logical concepts can be combined with each other, rules we've completely *constructed* in some shape that fits our needs. 
This is why the specific form of type-theory Magmide uses is called The Calculus of Constructions, because constructing new types and values of those types is one of the primitive assumptions of the system. It's one of the axioms.\n\n-->\n\nA reasonable place to start is by figuring out how to represent the basic ideas of true and false in Logic Magmide.\n\nWe might try just defining them as a discriminated union, our old friend `boolean`:\n\nTODO\n\nBut if we walk much further down this path, we won't get anywhere. We'll end up having to actually compute and compare booleans in all our \"proofs\", since `true` and `false` are only different *values*, not different *types*.\n\nThis is a good place to reveal something really important but fairly surprising: if you're a programmer, *then you already prove things whenever you program!* How is that true?\n\nThink about some random datatype such as `u64`. Any time you construct a value of `u64`, you're *proving* that some `u64` exists, or that the `u64` type is *inhabited* (the academic term). The mere act of providing a value of `u64` that actually typechecks as a `u64` very directly proves that it's possible to do so. Put another way, we can say that every concrete `u64` value provides *evidence* of `u64`, evidence that proves the \"proposition\" `u64`. It's a very different way of looking at what a datatype means, but it's true! The only problem with a proof of `u64` is that it isn't a very \"interesting\" or \"useful\" piece of evidence, but it's a piece of evidence nonetheless.\n\n<!-- Specifically we're talking about the `exists` rule, but just that every type defines its own family of values to prove exist. -->\n\nIn the same way, when you define a function, you're creating a *proof* that the input types of the function can somehow be transformed into the output type of the function. 
For example this function:\n\n```\nfn n_equals_zero n: u8;\n  return n == 0\n```\n\nhas type `u8 -> bool`, so the function *proves* that if we're given a `u8` we can always produce a `bool`. In this way the `->` represents *both* the real computation that will happen *and* the implication operator `P -> Q`! The reason implication and functions are equivalent is exactly because datatypes and propositions are equivalent. Think of this example:\n\n- the implication `P -> Q` has been proven\n- so if `P` can be proven\n- then `Q` can also be proven\n\nTODO truth table\n\nTo convert this into the language of types and programs, we just have to change \"implication\" to \"function\", \"proven\" to \"constructed\", and `P` and `Q` to some types:\n\n- the function `u8 -> bool` has been constructed\n- so if `u8` can be constructed\n- then `bool` can also be constructed\n\nPretty cool huh!\n\nThis simple idea is called the [Curry-Howard Correspondence](https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence), named after the researchers who discovered it. This is the main idea that allows programs to literally represent proofs.\n\nThe only problem with `u8 -> bool` is that, yet again, it isn't a proof of anything very interesting! The type of this function doesn't actually enforce that the `bool` is even *related* to the `u8` we were given. All these other functions also have the type `u8 -> bool` and yet do completely different things!\n\n```\nfn always_true _: u8;\n  return true\n\nfn always_false _: u8;\n  return false\n\nfn n_equals_nine n: u8;\n  return n == 9\n```\n\nThe simple type of `bool` only gives us information about one value, and can't encode *relationships between* values.\n\nBut if we enhance our language with *dependent types*, we can start doing really interesting stuff. Let's start with a function whose *type* proves if its input is equal to 5. 
We've already introduced asserted types, so let's define our own type to represent that idea (this isn't the right way to do this, we'll improve it in a second). Let's also write a function that uses it.\n\n\n\n\n\n\nIt's even possible to define types that *can't possibly* be constructed, such as an empty union: `type Empty; |`. When you try to actually create a value of `Empty`, you can't possibly do so, meaning that this type is impossible or \"False\".\n\n\nBut what about this definition of `True`?\n\nTODO\n\n---\n\n\n\n\n\n\n\n\n\n\nRepresenting propositions as datatypes and theorems as functions means that we don't have to *run* the code to \"compute\" the truth value of variables. We only have to *type check* the code to make sure the types are all consistent with each other. If the type of a function asserts that it can change input propositions into some output proposition, and the body of the function typechecks by actually breaking propositional data apart and transforming it into data of the output type, then the very existence of that function proves that the input propositions *imply* the output proposition.\n\n### Proofs using Tactics\n\n## Verified Compute Magmide\n\n### Separation Logic\nhttps://cacm.acm.org/magazines/2019/2/234356-separation-logic/fulltext\n\n### Separation Logic in Use\n\n### Logical Modeling\n\n### Testing and Conjecture Checking\n\n### Trackable Effects\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n---\n\n## Basics of Compute Magmide\n\nFirst let's get all the stuff that isn't unique about Magmide out of the way. Magmide has both a \"Computational\" language and a \"Logical\" language baked in. 
The Computational language is what we use to write programs that will actually run on some machine, and the Logical language is used to logically model data and prove things about programs.\n\nThe Computational language is technically defined as a raw assembly language, but since programming in assembly is extremely tedious, the default way to write computational code is with `core`, a language a lot like Rust.\n\n```magmide\n// single line comments use double-slash\n//\n  comments can be indented\n  so you can write them\n  across as many lines\n  as you like!\n\n  `core` is whitespace sensitive,\n  so indentation is used to structure the program\n\n// the main function is the entry point to your program\nfn main;\n  // immutable variables are declared with let:\n  let a = 1\n  // a = 2 <-- compiler error!\n  // variables can be redeclared:\n  let a = 2\n\n  // types are inferred, but you can provide them:\n  let b: u8 = 255\n\n  // mutable variables are declared with mut:\n  mut c = 2\n  c = 1\n  // and they can be redeclared:\n  mut c = 3\n  let c = 2\n\n  // booleans\n  let b: bool = true\n\n  // standard signed and unsigned integers\n  let unsigned_8_bits: u8 = 0\n  let signed_8_bits: i8 = 0\n\n  // floating point numbers\n  let float_32_bits: f32 = 0.0\n  let float_64_bits: f64 = 0.0\n\n  // arrays\n  // slices\n  // tuples\n\n  // core is literally just a \"portable assembly language\",\n  // so it doesn't have growable lists or strings by default!\n  // think of core in the same way as `no_std` in rust\n  // we hope rust itself will someday be reimplemented and formally verified using magmide!\n\n// the type system is very similar to rust\n// you can declare type aliases:\nalias byte; u8\n\n// you can declare new nominal types as structs:\ndata User;\n  id: usize\n  age: u8\n  active: bool\ndata Point; x: f64, y: f64\n\n// or unit\ndata Token\ndata Signal\n\n// or tuples\ndata Coord; f64, f64\ndata Pair;\n  i32\n  i32\n\n// or discriminated unions\ndata 
Event;\n  | PageLoad\n  | PageUnload\n  | KeyPress; char\n  | Paste; [char]\n  | Click; x: i64, y: i64\n// on which you can use the match command\nfn use_union event: Event;\n  match event;\n    PageLoad;\n    PageUnload;\n    Click x, y;\n    _; ()\n\n  // the is operator is like \"if let\" in rust\n  if event is KeyPress character;\n    print character\n  // and you can use it without destructuring?\n  if event is Paste;\n    print event.1\n\n// and they can be generic\ndata Pair T, U; T, U\ndata NonEmpty T;\n  first: T\n  rest: [T]\n```\n\nBut the entire point of Magmide is that you can prove your program is correct! How do we do that?\n\n\n\nThe most common and simplest way we can make provable assertions about our programs is by making our types *asserted types*. If we want to guarantee that a piece of data will always meet some criteria, we can make assertions about it with the `&` operator. Then, any time we assign a value to that type, we have to fulfill a *proof obligation* that the value meets all the assertions of the type. 
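For comparison, languages without asserted types can only approximate this with a newtype whose constructor performs the check at run time; here's a rough Rust sketch of that idea (hypothetical, and strictly weaker, since the proof obligation becomes a run-time test rather than a compile-time proof):

```rust
// a hypothetical Rust approximation of an asserted "nonzero usize" type:
// the invariant is enforced by a run-time check in the constructor,
// not by a statically discharged proof obligation
#[derive(Debug, Clone, Copy)]
struct NonZero(usize);

impl NonZero {
    fn new(n: usize) -> Option<NonZero> {
        if n > 0 { Some(NonZero(n)) } else { None }
    }

    fn get(self) -> usize {
        self.0
    }
}

fn main() {
    assert!(NonZero::new(3).is_some());
    assert!(NonZero::new(0).is_none());
    assert_eq!(NonZero::new(7).unwrap().get(), 7);
    println!("runtime checks passed");
}
```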
More on proofs in a second.\n\n```\n// this type will just be represented as a normal usize\n// but we can't assign 0 to it\nalias NonZero; usize & > 0\n\n// we can add as many assertions as we like\n// even using generic values in them\nalias Between min max; usize & >= min & <= max\n\n// the & operator essentially takes a single argument function,\n// so we can use the lambda operator\nalias Above min; usize & |> v; v > min\n\n// or we can use the \"hole\" _ to indicate the value in question\nalias Above min; usize & _ > min\n\n// this works for tuples and structs too\n// \"^\" can be used to refer to the parent datatype\n// so fields of a type can refer to other fields\nalias Range; i32, i32 & > ^.0\n\ndata Person;\n  age: u8\n  // the value of is_adult\n  // has to line up with .age >= 18\n  is_adult: bool & == ^.age >= 18\n  // this pattern of requiring a bool\n  // to exactly line up with some assertion\n  // is common enough to get an alias\n  is_adult: bool.exact (^.age >= 18)\n```\n\nSo how do we actually prove that our program follows these data assertions? Can the compiler figure it out by itself? In many simple situations it actually can! But it's literally impossible for it to do so in absolutely all situations (if it could, the compiler would be capable of solving any logical proof in the universe!).\n\nTo really understand how this all works, we have to get into the Logical side of Magmide, and talk about `Ideal`, `Prop`, and The Calculus of Constructions.\n\n## Logical Magmide\n\nFirst let's start with the `Ideal` type family. `Ideal` types are defined to represent *abstract*, *logical* data. They aren't intended to be encoded by real computers, and their only purpose is to help us define logical concepts and prove things about them. To go with the `Ideal` type family is a whole separate programming language, one that's *pure* and *functional*. Why pure and functional? 
Simply put, pure and functional languages relate directly to mathematical type theory (mathematical type theory is nothing but a pure and functional language!). It's much easier to define abstract concepts and prove things about them in pure and functional settings than in the messy imperative setting of real computers. Otherwise we'd have to deal with distracting details about memory layout, bit representation, allocation, etc. The \"programs\" we write in this pure functional language aren't actually intended to be run! They just define abstract algorithms, so we only care about them for their type-checking behavior and not their real behavior.\n\nThe type system of logical Magmide is shaped a lot like computational Magmide to make things convenient. But the big difference between types in logical Magmide and computational Magmide is how they handle type recursion.\n\n```\n// in computational Magmide types must be representable in bits and have a finite and knowable size,\n// meaning they can't reference themselves without some kind of pointer indirection\ndata NumLinkedList T;\n  item: T\n  next: *(NumLinkedList T)?\n\n// but in logical Magmide there's no such restriction, since these types are abstract and imaginary\nidl List T;\n  item: T\n  next: List T\n```\n\nLogical Magmide only really needs a few things to prove basically all of mathematics and therefore model computation and prove programs correct:\n\n- Inductive types, which live in one of two different type \"sorts\":\n  - `Ideal`, the sort for \"data\" (even if it's abstract imaginary data).\n  - `Prop`, the sort for \"propositions\", basically assertions about data.\n- Function types.\n\n\n---\n\n```\ndata equal_5 number;\n  | yes & (number == 5)\n  | no & (number != 5)\n\nfn is_equal_5 number: u8 -> equal_5 number;\n  if number == 5; return yes\n  else; return no\n```\n\nPretty simple! The ability to *dependently* reference the input `number` in the output type makes this work. 
And in this case, because the assertions we're making are so simple, the compiler is able to prove they're consistent without any extra work from us. The compiler would complain if we tried to create a function that *wasn't* obviously consistent:\n\n```\nfn is_equal_5 number: u8 -> equal_5 number;\n  if number == 6; return yes\n  else; return no\n// compiler error!\n```\n\nBut we don't have to define our own `equal_5` type, we can just use `bool`, which can already generically accept assertions. The same is also true for a few other standard library types like `Optional` and `Fallible` that are commonly used to assert something about data.\n\n```\n// bool can take a true assertion and a false assertion\nfn is_equal_5 number: u8 -> bool (number == 5) (number != 5);\n  return number == 5\n\n// we can also use the alias for this concept\nfn is_equal_5 number: u8 -> bool.exact (number == 5);\n  return number == 5\n```\n\nBut something about the above assertions like `number == 5` and `number != 5` might bother you. If proofs are *just data*, then where do `==` and `!=` come from? Are they primitive in the language? Or are they just data as well?\n\nThey are actually just data! But specifically, they're data that live in the `Prop` sort. 
`Prop` is the sort defined to hold logical assertions, and the rules about how it can be used make it suited for that task.\n\nLet's define a few of our own `Prop` types to get a feel for how it works.\n\n```\n// in the computational types, unit or `()` is the \"zero size\" type,\n// a type that holds no data\nalias UnitAlias; ()\ndata MyUnit\n\n// we can define a prop version as well!\nprop PropUnit\n// this type is \"trivial\", since it holds no information and can always be constructed\n// in the standard library this type is called \"always\", and in Coq it's called \"True\"\nalias AlwaysAlias; always\n\n// we already saw an example of a computational type that can't be constructed\ndata Empty; |\n// and of course we can do the same in the prop sort\nprop PropEmpty; |\n// this type is impossible to construct, which might seem pointless at first, but we'll see how it can be extremely useful later\n// in the standard library it's called \"never\", and in Coq it's called \"False\"\nalias NeverAlias; never\n```\n\nOkay we have prop types representing either trivial propositions or impossible ones. Now let's define ones to encode the ideas of logical \"or\", \"and\", \"not\", \"exists\", \"forall\", and equality.\n\n```\n// logical \"and\", the proposition that asserts the truth of two child propositions,\n// is just a tuple! 
a tuple that holds the two child propositions as data elements\n// we have to present a proof of both propositions in order to prove their \"conjunction\",\n// which is the academic term for \"and\"\nprop MyAnd P: prop, Q: prop; P, Q\n\n// then we use this constructor just like any other\ndef true_and_true: MyAnd always always = MyAnd always always\n\n// we could of course also structure it as a record,\n// but the names aren't really useful (which is why we invented tuples, right?)\nprop MyAnd P: prop, Q: prop;\n  left: P\n  right: Q\n\n// logical \"or\", the proposition that asserts the truth of either one child proposition or another,\n// is just a union!\n// we only have to present a proof for one of the propositions in order to prove their \"disjunction\",\n// which is the academic term for \"or\"\nprop MyOr P: prop, Q: prop;\n  | left; P\n  | right; Q\n\ndef true_or_true_left: MyOr always always = MyOr.left always\ndef true_or_true_right: MyOr always always = MyOr.right always\n\n// logical \"not\" is a little more interesting. what's the best way to assert some proposition *isn't* true?\n// should we say it's equal to \"never\"? that doesn't really make sense, since you'll see in a moment\n// that \"equality\" is just an idea we're going to define ourselves. instead we just want to prove that\n// this proposition behaves the same way as \"never\", in that it's impossible to actually construct a value of it.\n// the most elegant way is to say that if you *were* able to construct a value of this proposition,\n// we would *also* be able to construct a value of \"never\"! so \"not\" is just a function\n// that transforms some proposition value into \"never\".\n// notice that we don't need to create a new type for this, since MyNot is just a function\nalias MyNot P: prop; P -> never\n```\n\nEquality is interesting. 
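\n\nAs an aside before tackling equality: these connectives are ordinary inductive types in existing proof assistants too. A rough Lean sketch, with names chosen to echo the Magmide versions above:\n\n```\n-- conjunction is a record holding both proofs\nstructure MyAnd (P Q : Prop) : Prop where\n  left : P\n  right : Q\n\n-- disjunction is a union holding either proof\ninductive MyOr (P Q : Prop) : Prop where\n  | inl : P → MyOr P Q\n  | inr : Q → MyOr P Q\n\n-- negation is just a function into False (Lean's name for never)\ndef MyNot (P : Prop) : Prop := P → False\n```\n\n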
Depending on exactly what you're doing, you could define what it means for things to be \"equal\":\n\n- Two byte-encoded values are equal if their bitwise `xor` is equal to 0.\n- Two values are equal if any function you pass them to will behave exactly the same with either one.\n- A value is only equal to exactly itself.\n\nIn logical Magmide, since all values are \"ideal\" and not intended to ever actually exist, the simplest definition is actually that last one: a value is only equal to exactly itself.\n\n```\nprop MyEquality {T: Ideal} -- T, T;\n  @t -> MyEquality t t\n```\n\nLogical \"forall\" is the most interesting one, since it's the only one that's actually defined as a primitive in the language. We've actually been using it already! You might be surprised to learn that the function arrow `->` is just \"forall\" in disguise! It's simply the version that doesn't depend on the name of its input.\n\nIn type theory, if we want to provide a proof that \"forall objects in some set, some property holds\", we just have to provide a *function* that takes as input one of those objects and returns a proof (which is just a piece of data) that it has that property. And of course it can take more than one input, any of which can be proof objects themselves.\n\nSo how do you actually write a \"forall\" type? Since `forall` is such an important concept, its Magmide syntax very concisely uses `@`. Here's how you would write the type for the `is_equal_5` function we wrote earlier: `@ number: u8 -> bool.exact number == 5`. I prefer to read this as: \"given any `number` which is a `u8`, I'll give you a `bool` which exactly equals whether `number` is equal to 5\". 
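\n\nFor comparison, this forall-as-function idea is literally how existing proof assistants work; in Lean, for instance, a proof of a universal statement is just a function from each input to a proof about that input (a rough sketch using Lean names):\n\n```\n-- the plain arrow is just the non-dependent special case of forall;\n-- the proof term is an ordinary function over every Nat\ndef double_nonneg : ∀ n : Nat, 0 ≤ n + n := fun n => Nat.zero_le (n + n)\n```\n\n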
For functions that take multiple \"forall\" variables as inputs (the academic term for accepting a forall variable as input is to \"universally quantify\" the variable, since a forall assertion proves something universal), you use commas instead of arrows between them: `@ n1, n2 -> bool.exact n1 == n2`.\n\nVery importantly, in order for that function to *really* prove the assertion, it has to be infallible (it can't fail or crash on any input) and provably terminating (it can't infinitely loop on any input). It is allowed to require things about the input (for example a function can be written to only accept even integers rather than all integers), but it has to handle every value it makes an assertion about.\n\n```\n// since we're providing default assertions,\n// the normal form of bool only asserts the useless `always`\ntype bool when_true = always, when_false = always;\n  | true & when_true\n  | false & when_false\n\n// you'll notice we don't bother with an assertion for T\n// since the user can just provide an asserted type themselves\ntype Optional T, when_none = always;\n  | Some; T\n  | None & when_none\n\n// same thing here\ntype Fallible T, E;\n  | Ok; T\n  | Err; E\n```\n\n## Understanding indexed types\n\nIn a language with normal generics, if multiple functions or constructors in a type all use a generic variable, then when those functions or constructors are actually used, all the instances of that generic variable have to be equal. You can get that exact same behavior in Magmide, but you can *also* get more \"dependent\" versions of generic variables which are allowed to be different.\nThis is useful in many situations, but it's best to start with a few examples:\n\n- normal polymorphic lists, to understand the normal situation, and how it would be annoying or inconsistent to allow different values of the generic variable. Forcing them on the left side basically allows us to elide any mention of them and still keep the requirement that they align.\n- length indexed lists\n- attestations of zero or one-ness\n- even numbers\n\nIn the case of the `even` predicate, the different constructors all provide evidence of the same proposition `even`, but they do so for different numbers.\n\nThe key insight is in understanding *The Curry-Howard Correspondence* and the concept of *Proofs as Data*. These are a little mind-bending at first, but let's go through them.\n\nA good way to understand this is to see *normal* programming in a different light.\n\nBasically, any type that lives in the `Prop` sort is *by definition* a type that represents a truth value, a logical claim.\n\nProofs are *all* defined by constructors for data; it's just data that lives in a special type sort, specifically the type sort `Prop`. First we define some type that defines data representing some logical claim, and then when we want to actually *prove* such a claim, we just have to *construct* a value of that type!\n\nIt's important to notice though that this wouldn't be very useful if the type system of our language wasn't *dependent*, meaning that the type of one value can refer to any other separate value in scope. When we put propositions-as-data together with dependent types, we have a proof checker.\n\nThis is the key insight. When we make any kind of logical claim, we have to define *out of nowhere* what the definition of that claim is.\n\n---\n\n## Theorem proving\n\nIn normal programming, we usually follow a pattern of writing out code, then checking it, and then filling in the gaps, and we don't really need any kind of truly interactive system to help us as we go.\n\nBut writing proofs, even though it's technically the exact same as just writing code, isn't as easy to do in that way. 
When we're trying to solve a proof it's difficult to keep in mind all the facts and variables we've assumed exist at that particular stage, and we often have to move forward step by step.\n\nThis is why theorem proving is most often done with *interactive tactics*. Instead of writing out all or most of the code as we might for a purely computational function, we instead enter an interactive session with the compiler, where it shows us what we have available to us and what we have left to prove.\n\nIn `magmide`, we do this using the `magmide tacticmode` command, or with a variety of editor plugins that use `tacticmode` under the hood. Say we want to prove something about numbers, maybe that equality of natural numbers is symmetric. First we write out our definition of that theorem:\n\n```\nthm equality_is_symmetric: @ (x, y): nat -> x == y <-> y == x;\n```\n\nthen give the command `magmide tacticmode equality_is_symmetric` (or use the equivalent start command of our editor), and Magmide will find that definition, parse and type check any definitions it depends on, and enter interactive tactic mode. It shows the *context*, or the variables this definition takes as inputs, and the *goal*, which is basically the type of the thing we're trying to prove. Remember, a theorem is a thing whose final output lives in the `Prop` sort, whether that be a piece of data that lives in `Prop` or a function that returns something in `Prop`. 
So when we \"prove\" a theorem we're really just constructing a piece of code!\n\nTo really make clear that theorems are just code, let's actually write out a theorem manually!\n\nOr let's define a *computational* function using tactics.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n---\n\n## Abstract assembly language\n\nHere's an example of a Magmide program in the \"core\" assembly language that is general enough to compile to basically any architecture.\n\nMagmide itself ships with a few different layers of computational language:\n\n- Thinking about computation in general.\n- Thinking about a specific collection of generalizable instructions in an unknown machine. This means you're reasoning about specific calling conventions.\n- Thinking about blocks with \"function arguments\"\n\n- `asm_zero`: a truly representational abstract assembly language, used to define the \"common core\" of all supported architectures\n- `asm_one`: the next level of abstraction, with type definitions and sugars, llvm style functions along with call/ret instructions. must instantiate with a calling convention\n- `core`: a c-like language with normal functions, match/if/for/while/loop/piping structures, functions for malloc/free, but no ownership/lifetime stuff. must instantiate with a calling convention and definitions for malloc/free in the desired environment\n- `system_core`: same c-like language, but with assumptions of \"system\" calls for thread spawn/join, io, async, etc\n\n\nThere can be a `call` instruction that takes a label or address and together with a `ret` instruction abstracts away the details of a calling convention. We assume it does whatever is necessary under the calling convention to set the return address and push arguments to wherever they go.\n\n\nhttps://cs61.seas.harvard.edu/site/2018/Asm1/\nall labels and global static data are accessed relative to the current instruction pointer (or at least they should be to produce a safe position independent executable). 
so when assembling to machine code, the distance between an instruction accessing something and that thing is computed\n\nhttps://cs61.seas.harvard.edu/site/2018/Asm2/\nA `push X` instruction pushes the variable X onto the stack, which moves the stack pointer (down, in the common convention where the stack grows downward, and always by one word, since X is padded to fit a word) and moves `X` into that location\n\nso `push X` is roughly (`pop` is the opposite):\n\n```\nsub $8, %rsp // the x86-64 stack grows downward; this would be add on an upward-growing stack\nmov X, (%rsp)\n```\n\n```\ndefine i32 @add1(i32 %a, i32 %b) {\nentry:\n  %tmp1 = add i32 %a, %b\n  ret i32 %tmp1\n}\n\ndefine i32 @add2(i32 %a, i32 %b) {\nentry:\n  %tmp1 = icmp eq i32 %a, 0\n  br i1 %tmp1, label %done, label %recurse\n\nrecurse:\n  %tmp2 = sub i32 %a, 1\n  %tmp3 = add i32 %b, 1\n  %tmp4 = call i32 @add2(i32 %tmp2, i32 %tmp3)\n  ret i32 %tmp4\n\ndone:\n  ret i32 %b\n}\n```\n\nPerhaps \"given all arguments, then conclusion\" is a better syntax for implications, for readability.\n\n---\n\nTypes are algebraic because they follow algebra-like rules.\n\nIf False/never is 0 and True/always/unit is 1, then treating \"tupling\" like * and \"unioning\" like + has the same characteristics as with numbers:\n\n- anything tupled with never is equivalent to never\n- anything unioned with never is equivalent to the original (since that arm of the union can't actually be constructed)\n- anything tupled with always is equivalent to the original (the always adds no new information)\n- anything unioned with always is just a thing with one more arm\n\nBoth tupling and unioning are \"logically\" commutative and associative (real code would have to access them differently, but that's the only difference; the information they hold is the same. 
it can be proven that any function taking them as input could always be transformed to a logically equivalent version)\nunioning is distributive over tupling (tupling something with a union is the same as tupling each arm of the union with that thing)\n\n(I think this forms two commutative monoids related by distributivity, i.e. a commutative semiring, up to isomorphism)\n"
  },
  {
    "path": "posts/iris-in-plain-terms.md",
"content": "# A No-Nonsense Introduction to the Iris Separation Logic\n\nThe Iris Separation Logic is amazing, and I believe it's going to be the secret at the heart of the next generation of truly correct software. The only problem is that it isn't explained in a way *practicing software engineers* can understand. It's explained *extremely* well for programming language researchers (in fact unusually well), but it still hasn't quite crossed the mainstream line:\n\n- There isn't a language engineers can actually *use* to play around with Iris and gain an understanding of it, so it remains locked in papers.\n- The papers describing Iris gloss over math concepts like partial commutative monoids, extension orders, and reflexivity. A full explanation of Iris would cover these concepts first, or explain enough of them along the way.\n- The papers use difficult-to-parse, arbitrary symbology rather than helpful, \"graspable\" names, so it's difficult to orient oneself while reading.\n"
  },
  {
    "path": "posts/toward-termination-vcgen.md",
    "content": "we want to do this rather than simply think at a higher level of abstraction for two reasons:\n\n- we want the power and flexibility to prove termination of more exotic programs that can't be represented with simple functions (goto statements, algebraic effects, jumping to dynamic code, etc)\n- we need a foundation of reasoning *upon which to build those higher levels of abstraction!* the whole point of magmide is that we're extending our formal understanding as far down as we possibly can, so that we can combine upwards with confidence we haven't merely assumed something without cause.\n\n\n\nin order to prove an assembly language program terminates, we have to present two things:\n\n- some well-founded relation that we can somehow convert a machine state into (the relation could be directly over machine states)\n- a proof that the assembly step relation will, for our particular program, always make \"progress\" in the well-founded relation, by moving with each instruction into a machine state that is \"less than\" the previous one with regard to the relation\n\nA natural way to do this is by doing the following:\n\n- chop up our program into labeled blocks, much like the \"basic blocks\" of llvm (although we don't have to be as perfectly strict as those)\n- create a directed graph of the program with these labeled blocks as nodes, and possible jumps as edges\n- define a block that is intended to be the \"starting\" block of execution, and create an artificial \"start\" node representing any position which could in a well-formed way jump to our program, and create an edge from the start block to our real start block (an environment could choose to load our program and jump to some arbitrary instruction, but it makes sense to just require as a precondition of correct execution that a predefined starting instruction is used instead) (we likely want to simply declare some instruction as the defined starting point, and package this instruction together 
with the program to define the program as a broader \"executable\", or even just rearrange the program so that the desired starting instruction is always the very first one)\n- include an artificial \"stop\" node and create edges to it from all blocks that stop execution\n- find all the strongly connected components of the graph with an algorithm such as Tarjan's. with the DAG of components, topologically order them according to their maximum distance from stop. this maximum distance number forms the first and highest index of our lexicographic ordering\n- now for each component we have to find a well-founded ordering. if each component is truly strongly connected (isn't just a non-recursive single node), then the programmer will likely have to provide a well-founded ordering, but we can go a little farther.\n\nin each component, we can do a similar exercise:\n\n- find the nodes that are actually jumped to from nodes outside the component, and create a new artificial \"start\" node that points to them\n- find the nodes that have jumps exiting the component, and create a new artificial \"stop\" node they point to\n- go through and find a maximum distance from stop for each node. this number won't necessarily create a topological order, but it does create topological *classes* we can use\n- go through each topological class, and find out if the class itself has any strongly connected subdivisions. if there are any subdivisions within the class then we can create a topological ordering within the class\n- every strongly connected component within a class needs to have a well-founded ordering provided by the programmer, and a proof that that ordering is decremented by the time execution leaves the class component\n\n*cleverly*, we only have to flag an edge for justification within the same distance class (that isn't a self-edge) if the distance class isn't itself strongly connected.\nalso, it will *probably* be a better and more pleasant or correct experience if self-recursive nodes just get their own separate well-founded ordering and each individual recursive edge needs to be justified along that ordering. *MAYBE??* I guess if the work done by self-edges is related to the total progress of the whole component then they can just somehow incorporate the structures being progressed by self-edges into the structure being progressed by the component.\n\nwhat we're trying to do is fill in all the \"obvious\" portions. we want to only make them provide an *interesting* ordering that somehow relates to the semantics of their program, and then only make them justify steps in the program that really do truly need to respect that ordering. any tedious book-keeping we can do for them we should try to do\n\nthings to watch out for:\n\n- \"trapped jumps\", any kind of jump that refers to the label of itself. it can be proven that if a program ever is in a state such that an instruction jumps to itself, the program will be permanently stuck on that instruction, and will never make any kind of useful progress in any well-founded relation. 
the machine state will never change, since even the program counter will remain the same\n\nlexicographic orderings have \"higher priority\" indices\n\na program is a list of labeled sections\nwe can go over that list and produce a directed graph of all instructions that go from one labeled section to another:\n- obviously branching instructions that go to a label count, even ones that go to the same labeled section since that's a recursive branch\n- any possibly sequential instructions at the *end* of a section go to the *next* section, so they also count\n\nfrom this graph, we can produce a list of strongly connected components, and the network of strongly connected components forms a DAG\nthis DAG from the single starting instruction to all possible exit nodes (nodes that include an exit instruction) is well-founded, since we're decreasing the current maximum distance from an exit node. this forms the first and highest priority index in our total lexicographic order\nthe case of non-recursive single-node components is trivial, since these aren't really strongly connected, and always first move sequentially through the section before always progressing along the DAG\n\nwith this, we can prove termination if we're given a progress type/relation/function/proof for each component\nto narrow the instructions that need to be justified, we can look at each strongly connected component, and topologically order the nodes according to their maximum distance from an exit node (any node that exits the component)\nwhen they're ordered like this, we can imagine them as a DAG again by \"severing\" the \"backwards\" edges, ones that go toward a topologically lower node\nthen we can supply a lexicographical ordering for this component by just pushing *their* decreasing type on the front of the same ordering we would produce for a *real* DAG. 
their supplied progress type will have the highest priority, since it represents the entire chunk of work the component is trying to do, and the rest of the ordering just handles all the boring book-keeping as we go along through this \"severed\" DAG.\nwe give them the obligation to show that the \"backwards\" or recursive edges (or Steps) do in fact make progress.\nit will probably be necessary for sanity's sake to simply require a proof that the progress indicator gets decreased *sometime* before any backward edge\n\nor we need an even higher version of Steps, one that encodes program progression across section nodes rather than individual instructions. probably the final version requires us to prove that if a progression relation across section nodes is well-founded, then the underlying step progression is as well\n\n```v\n  forall (T: Type) (progress: T -> T -> Prop) (convert: MachineState -> T), well_founded progress\n  forall cur next, Step cur next -> Within cur component -> Within next component -> progress (convert next) (convert cur)\n```\n\nso if we exit the segment, we've made progress\nwithin the segment we can just say we're making sequential progress?\n\nprobably to prove a jump to dynamic code will terminate or just behave properly, we need to have the programmer provide a list of resumption locations in the current graph (which could be the exit location!) and prove the code they're jumping to will in fact only exit itself by going to those known places\nthey also need to prove that somehow the unknown code has been itself checked for well-formedness and absence of unfulfilled proof obligations to justify jumping to it and still keeping clean trackable effects\njumping to unchecked code violates *all* registered trackable effects, since the unchecked code could do literally anything it wants.\n"
  },
  {
    "path": "posts/what-is-magmide.md",
    "content": ""
  },
  {
    "path": "src/ast.rs",
    "content": "#[derive(Debug, Eq, PartialEq)]\npub enum TypeBody {\n\tUnit,\n\tUnion { branches: Vec<String> },\n\t// Tuple {  },\n\t// Record { fields: Vec<(String, String)> },\n}\n\n// #[derive(Debug, Eq, PartialEq)]\n// pub enum Statement {\n// \t// Use(UseTree),\n// \tLet(LetStatement),\n// \tDebug(DebugStatement),\n// \tNamed(ModuleItem),\n// }\npub type Statement = Term;\n\n#[derive(Debug, Eq, PartialEq)]\npub struct TypeDefinition {\n\tpub name: String,\n\tpub body: TypeBody,\n}\n\n#[derive(Debug, Eq, PartialEq)]\npub struct ProcedureDefinition {\n\tpub name: String,\n\tpub parameters: Vec<(String, String)>,\n\tpub return_type: String,\n\tpub statements: Vec<Statement>,\n}\n\n// #[derive(Debug)]\n// pub enum Statement {\n// \tBare(Term),\n// \tLet(),\n// \t// Module\n// }\n// TODO\n// pub type Statement = Term;\n\n#[derive(Debug, Eq, PartialEq)]\npub struct LetStatement {\n\t// pub pattern: Pattern,\n\tpub pattern: String,\n\tpub type_declaration: Option<Term>,\n\tpub term: Term,\n}\n\n#[derive(Debug, Eq, PartialEq)]\npub struct DebugStatement {\n\tpub term: Term,\n}\n\n\n\n#[derive(Debug, Eq, PartialEq)]\npub enum ModuleItem {\n\tType(TypeDefinition),\n\tProcedure(ProcedureDefinition),\n\tDebug(DebugStatement),\n}\n\n#[derive(Debug, Eq, PartialEq)]\npub enum Term {\n\tLone(String),\n\tChain(String, Vec<ChainItem>),\n\tMatch {\n\t\tdiscriminant: Box<Term>,\n\t\tarms: Vec<MatchArm>,\n\t},\n}\n\n#[derive(Debug, Eq, PartialEq)]\npub struct MatchArm {\n\t// pattern: Pattern,\n\tpub pattern: Term,\n\t// statements: Vec<Term>,\n\tpub statement: Term,\n}\n\n#[derive(Debug, Eq, PartialEq)]\npub enum ChainItem {\n\tAccess(String),\n\tCall { arguments: Vec<Term> },\n\t// // IndexCall { arguments: Vec<Term> },\n\t// // TODO yikes? 
using a complex term to return a function that's called freestanding?\n\t// FreeCall { target: Term, arguments: Vec<Term> },\n\t// // tapping is only useful for debugging, and should be understood as provably not changing the current type\n\t// CatchCall { parameters: Either<NamedPattern, Vec<NamedPattern>>, statements: Vec<Term>, is_tap: bool },\n\t// ChainedMatch { return_type: Term, arms: Vec<MatchArm> },\n}\n"
  },
  {
    "path": "src/checker.rs",
    "content": "// http://adam.chlipala.net/cpdt/html/Universes.html\nuse std::collections::{HashMap};\n\nuse crate::ast::*;\npub type Ident = String;\n// a simple identifier can refer either to some global item with a path like a type or function (types and functions defined inside a block of statements are similar to this, but don't have a \"path\" in the strict sense since they aren't accessible from the outside)\n// or a mere local variable\n#[derive(Debug, Clone)]\npub struct Scope<'a> {\n\tscope: HashMap<&'a Ident, ScopeItem<'a>>,\n}\n\n// TODO should probably manually implement PartialEq in terms of pointer equality (std::ptr::eq)\n#[derive(Debug, PartialEq, Clone)]\npub enum ScopeItem<'a> {\n\t// Module(Module),\n\tType(&'a TypeDefinition),\n\tProcedure(&'a ProcedureDefinition),\n\t// Prop(PropDefinition),\n\t// Theorem(TheoremDefinition),\n\t// Local(Term),\n\tData(Data),\n\tPlaceholder,\n}\n\ntrait DebugDisplay {\n\tfn debug_display(&self) -> String;\n}\n\nimpl<'a> DebugDisplay for ScopeItem<'a> {\n\tfn debug_display(&self) -> String {\n\t\tmatch self {\n\t\t\tScopeItem::Type(type_definition) => type_definition.name.clone(),\n\t\t\tScopeItem::Procedure(procedure_definition) => procedure_definition.name.clone(),\n\t\t\t// ScopeItem::Prop(PropDefinition),\n\t\t\t// ScopeItem::Theorem(TheoremDefinition),\n\t\t\t// ScopeItem::Local(Term),\n\t\t\tScopeItem::Data(data) => format!(\"{}.{}{}\", data.type_path, data.name, data.body.debug_display()),\n\t\t\tScopeItem::Placeholder => unimplemented!(),\n\t\t}\n\t}\n}\n\n#[derive(Debug, PartialEq, Clone)]\npub struct Data {\n\tpub name: String,\n\tpub type_path: String,\n\tpub body: Body,\n}\n\n#[derive(Debug, PartialEq, Clone)]\npub enum Body {\n\tNotConstructed,\n\tUnit,\n\t// Tuple,\n\t// Record,\n}\n\nimpl DebugDisplay for Body {\n\tfn debug_display(&self) -> String {\n\t\t\"\".into()\n\t}\n}\n\nimpl<'a> Scope<'a> {\n\tpub fn new() -> (Scope<'a>, Ctx) {\n\t\t(Scope{ scope: HashMap::new() }, Ctx{ errors: 
Vec::new(), debug_trace: Vec::new() })\n\t}\n\n\tpub fn from<const N: usize>(pairs: [(&'a Ident, ScopeItem<'a>); N]) -> (Scope<'a>, Ctx) {\n\t\t(Scope{ scope: HashMap::from(pairs) }, Ctx{ errors: Vec::new(), debug_trace: Vec::new() })\n\t}\n\n\tpub fn checked_insert(&mut self, ctx: &mut Ctx, ident: &'a Ident, scope_item: ScopeItem<'a>) -> Option<()> {\n\t\tmatch self.scope.insert(ident, scope_item) {\n\t\t\tSome(_) => {\n\t\t\t\tctx.add_error(format!(\"name {ident} has already been used\"));\n\t\t\t\tNone\n\t\t\t},\n\t\t\tNone => Some(()),\n\t\t}\n\t}\n\n\tpub fn type_check_module(&mut self, ctx: &mut Ctx, module_items: &'a Vec<ModuleItem>) {\n\t\tfor module_item in module_items {\n\t\t\tself.name_pass_type_check_module_item(ctx, module_item);\n\t\t}\n\n\t\tfor module_item in module_items {\n\t\t\tself.main_pass_type_check_module_item(ctx, module_item);\n\t\t}\n\t}\n\n\tpub fn name_pass_type_check_module_item(&mut self, ctx: &mut Ctx, module_item: &'a ModuleItem) {\n\t\tmatch module_item {\n\t\t\tModuleItem::Type(type_definition) => {\n\t\t\t\tself.checked_insert(ctx, &type_definition.name, ScopeItem::Type(type_definition));\n\t\t\t},\n\t\t\tModuleItem::Procedure(procedure_definition) => {\n\t\t\t\tself.checked_insert(ctx, &procedure_definition.name, ScopeItem::Procedure(procedure_definition));\n\t\t\t},\n\t\t\tModuleItem::Debug(_) => {},\n\t\t}\n\t}\n\n\tpub fn main_pass_type_check_module_item(&self, ctx: &mut Ctx, module_item: &ModuleItem) {\n\t\tmatch module_item {\n\t\t\tModuleItem::Type(type_definition) => {\n\t\t\t\tself.type_check_type_definition(ctx, type_definition);\n\t\t\t},\n\t\t\tModuleItem::Procedure(procedure_definition) => {\n\t\t\t\tself.type_check_procedure_definition(ctx, procedure_definition);\n\t\t\t},\n\t\t\tModuleItem::Debug(debug_statement) => {\n\t\t\t\tself.type_check_debug_statement(ctx, debug_statement);\n\t\t\t},\n\t\t}\n\t}\n\n\tpub fn type_check_type_definition(&self, _ctx: &mut Ctx, type_definition: &TypeDefinition) {\n\t\tmatch 
&type_definition.body {\n\t\t\tTypeBody::Unit => {},\n\t\t\tTypeBody::Union { branches } => {\n\t\t\t\tfor _branch in branches {\n\t\t\t\t\t// TODO these are just strings now, but there will be work to do soon\n\t\t\t\t}\n\t\t\t},\n\t\t}\n\t}\n\tpub fn type_check_procedure_definition(&self, ctx: &mut Ctx, procedure_definition: &'a ProcedureDefinition) {\n\t\tlet mut procedure_scope = self.clone();\n\t\tprocedure_scope.type_check_parameters(ctx, &procedure_definition.parameters);\n\t\tlet statements_type = procedure_scope.type_check_statements(ctx, &procedure_definition.statements);\n\t\tlet return_type = self.checked_get(ctx, &procedure_definition.return_type);\n\n\t\tmatch (statements_type, return_type) {\n\t\t\t(Some(statements_type), Some(return_type)) => {\n\t\t\t\tself.checked_assignable_to(ctx, &statements_type, return_type);\n\t\t\t},\n\t\t\t_ => {},\n\t\t}\n\t}\n\n\tpub fn type_check_parameters(&mut self, ctx: &mut Ctx, parameters: &'a Vec<(String, String)>) {\n\t\tfor (param_name, param_type) in parameters {\n\t\t\tlet param_type = self.checked_get(ctx, param_type).unwrap_or(&ScopeItem::Placeholder).clone();\n\t\t\tself.checked_insert(ctx, param_name, param_type);\n\t\t}\n\t}\n\n\tpub fn type_check_statements(&mut self, ctx: &mut Ctx, statements: &Vec<Statement>) -> Option<ScopeItem<'a>> {\n\t\tlet mut statements_type = None;\n\t\tfor statement in statements {\n\t\t\tstatements_type = self.type_check_term(ctx, statement);\n\t\t}\n\t\tstatements_type\n\t}\n\tpub fn reduce_statements(&mut self, ctx: &mut Ctx, statements: &Vec<Statement>) -> Option<ScopeItem<'a>> {\n\t\tlet mut current_item = None;\n\t\tfor statement in statements {\n\t\t\tcurrent_item = Some(self.reduce_term(ctx, statement)?);\n\t\t}\n\t\tcurrent_item\n\t}\n\n\tpub fn checked_assignable_to(&self, ctx: &mut Ctx, proposed: &ScopeItem<'a>, demanded: &ScopeItem<'a>) {\n\t\t// TODO this will be more complicated, but for now it's just equality\n\t\tif *proposed == ScopeItem::Placeholder || 
*demanded == ScopeItem::Placeholder {\n\t\t\treturn;\n\t\t}\n\t\tif proposed != demanded {\n\t\t\tctx.add_error(format!(\"{} isn't assignable to {}\", proposed.debug_display(), demanded.debug_display()));\n\t\t}\n\t}\n\n\tpub fn type_check_debug_statement(&self, ctx: &mut Ctx, debug_statement: &DebugStatement) -> Option<()> {\n\t\tself.type_check_term(ctx, &debug_statement.term)?;\n\t\t// TODO this probably actually deserves an unwrap/panic, reduction should always work after type checking\n\t\t// let item = self.reduce_term(ctx, &debug_statement.term)?;\n\t\t// ctx.debug_trace.push(format!(\"{:?}\", item));\n\t\tSome(())\n\t}\n\n\tpub fn type_check_term(&self, ctx: &mut Ctx, term: &Term) -> Option<ScopeItem<'a>> {\n\t\tmatch term {\n\t\t\tTerm::Lone(ident) => {\n\t\t\t\tSome(self.checked_get(ctx, ident)?.clone())\n\t\t\t},\n\t\t\tTerm::Chain(first, chain_items) => {\n\t\t\t\tlet mut current_type = self.checked_get(ctx, first)?.clone();\n\t\t\t\tfor chain_item in chain_items {\n\t\t\t\t\tmatch chain_item {\n\t\t\t\t\t\tChainItem::Access(path) => {\n\t\t\t\t\t\t\tcurrent_type = self.type_check_access_path(ctx, &current_type, path)?;\n\t\t\t\t\t\t},\n\t\t\t\t\t\tChainItem::Call { arguments } => {\n\t\t\t\t\t\t\tlet argument_types = arguments.iter().map(|arg| self.type_check_term(ctx, arg)).collect::<Option<_>>()?;\n\t\t\t\t\t\t\tcurrent_type = self.type_check_call(ctx, &current_type, argument_types)?;\n\t\t\t\t\t\t},\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tSome(current_type)\n\t\t\t},\n\t\t\tTerm::Match { discriminant, arms } => {\n\t\t\t\tlet discriminant_type = self.type_check_term(ctx, discriminant)?;\n\t\t\t\t// make sure discriminant_type is always assignable to the patterns,\n\t\t\t\t// and that all the arms have the same inferred type, which is difficult because it's more about universal assignability\n\t\t\t\tlet mut arm_types = Vec::new();\n\t\t\t\tfor MatchArm { pattern, statement } in arms {\n\t\t\t\t\tself.type_check_pattern_matches(ctx, pattern, 
&discriminant_type);\n\t\t\t\t\tlet arm_type = self.type_check_term(ctx, statement);\n\t\t\t\t\tarm_types.push(arm_type);\n\t\t\t\t}\n\n\t\t\t\t// TODO also have to make sure all branches are covered by the arms\n\n\t\t\t\tlet arm_types: Vec<_> = arm_types.into_iter().collect::<Option<_>>()?;\n\t\t\t\t// if there aren't any arm_types then either arms were ill-typed or there aren't any arms,\n\t\t\t\t// which either means\n\t\t\t\t// - the arms don't cover the type and *that* will be an error\n\t\t\t\t// - the type is empty, in which case TODO what should we do here? there won't be any errors, because we won't check for assignability to anything, but we also won't have a type to return\n\t\t\t\t// honestly this entire function will just change in the future, since we'll probably accept some suggested inferred type or something\n\t\t\t\t// which means that matches over empty types will just be allowed to infer to whatever was inferred from the outside of this function\n\t\t\t\tlet (first_arm_type, rest_arm_types) = arm_types.split_first()?;\n\t\t\t\tfor rest_arm_type in rest_arm_types {\n\t\t\t\t\tself.checked_assignable_to(ctx, rest_arm_type, first_arm_type);\n\t\t\t\t}\n\n\t\t\t\tSome(first_arm_type.clone())\n\t\t\t},\n\t\t}\n\t}\n\tpub fn reduce_term(&self, ctx: &mut Ctx, term: &Term) -> Option<ScopeItem<'a>> {\n\t\tmatch term {\n\t\t\tTerm::Lone(ident) => {\n\t\t\t\tSome(self.checked_get(ctx, ident)?.clone())\n\t\t\t},\n\t\t\tTerm::Chain(first, chain_items) => {\n\t\t\t\tlet mut current_item = self.checked_get(ctx, first)?.clone();\n\t\t\t\tfor chain_item in chain_items {\n\t\t\t\t\tmatch chain_item {\n\t\t\t\t\t\tChainItem::Access(path) => {\n\t\t\t\t\t\t\tcurrent_item = self.checked_access_path(ctx, &current_item, path)?;\n\t\t\t\t\t\t},\n\t\t\t\t\t\tChainItem::Call { arguments } => {\n\t\t\t\t\t\t\tlet arguments = arguments.iter().map(|arg| self.reduce_term(ctx, arg)).collect::<Option<_>>()?;\n\t\t\t\t\t\t\tcurrent_item = self.checked_call(ctx, &current_item, 
arguments)?;\n\t\t\t\t\t\t},\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tSome(current_item)\n\t\t\t},\n\t\t\tTerm::Match { discriminant, arms } => {\n\t\t\t\tlet discriminant = self.reduce_term(ctx, discriminant)?;\n\t\t\t\tfor MatchArm { pattern, statement } in arms {\n\t\t\t\t\tif self.test_pattern_matches(ctx, pattern, &discriminant) {\n\t\t\t\t\t\treturn self.reduce_term(ctx, statement);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tNone\n\t\t\t},\n\t\t}\n\t}\n\n\tpub fn type_check_pattern_matches(&self, ctx: &mut Ctx, pattern: &Term, discriminant_type: &ScopeItem) {\n\t\t// this pattern matching code will be one of the most complicated parts of the entire language\n\t\t// for now, we're just going to go through the arms and reduce everything and check equality\n\t\tmatch pattern {\n\t\t\tTerm::Lone(s) if s == \"_\" => {},\n\t\t\tpattern => {\n\t\t\t\tif let Some(pattern) = self.type_check_term(ctx, pattern) {\n\t\t\t\t\tif pattern != *discriminant_type {\n\t\t\t\t\t\tctx.add_error(format!(\"{} isn't a match for {}\", pattern.debug_display(), discriminant_type.debug_display()));\n\t\t\t\t\t}\n\n\t\t\t\t}\n\t\t\t},\n\t\t}\n\t}\n\tpub fn test_pattern_matches(&self, ctx: &mut Ctx, pattern: &Term, discriminant: &ScopeItem) -> bool {\n\t\t// this pattern matching code will be one of the most complicated parts of the entire language\n\t\t// for now, we're just going to go through the arms and reduce everything and check equality\n\t\tmatch pattern {\n\t\t\tTerm::Lone(s) if s == \"_\" => true,\n\t\t\tpattern => {\n\t\t\t\tif let Some(pattern) = self.reduce_term(ctx, pattern) {\n\t\t\t\t\tpattern == *discriminant\n\t\t\t\t}\n\t\t\t\telse { false }\n\t\t\t},\n\t\t}\n\t}\n\n\tpub fn type_check_call(&self, ctx: &mut Ctx, item: &ScopeItem<'a>, argument_types: Vec<ScopeItem<'a>>) -> Option<ScopeItem<'a>> {\n\t\tmatch item {\n\t\t\tScopeItem::Type(type_definition) => {\n\t\t\t\t// TODO this isn't accurate if the type is a tuple variant\n\t\t\t\tctx.add_error(format!(\"type {} is a type, not a 
callable\", type_definition.name));\n\t\t\t\tNone\n\t\t\t},\n\t\t\tScopeItem::Procedure(procedure_definition) => {\n\t\t\t\t// TODO this is one of those situations where incremental compilation will be helpful\n\t\t\t\t// in the future, we won't in any way recheck this procedure definition,\n\t\t\t\t// we'll just query for its already checked information and compare that against the arguments\n\t\t\t\t// that already checked information can potentially be poisoned/placeholder, so we'll just not bother\n\t\t\t\tlet return_type = self.placeholder_get(&procedure_definition.return_type);\n\n\t\t\t\tlet num_arguments = argument_types.len();\n\t\t\t\tlet num_params = procedure_definition.parameters.len();\n\t\t\t\tif num_arguments != num_params {\n\t\t\t\t\tctx.add_error(format!(\"{} takes {} parameters but this call gave {}\", procedure_definition.name, num_params, num_arguments));\n\t\t\t\t}\n\t\t\t\telse {\n\t\t\t\t\tfor (arg_type, (_, param_type)) in std::iter::zip(argument_types.into_iter(), procedure_definition.parameters.iter()) {\n\t\t\t\t\t\tlet param_type = self.placeholder_get(param_type);\n\t\t\t\t\t\tself.checked_assignable_to(ctx, &arg_type, param_type);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tSome(return_type.clone())\n\t\t\t},\n\t\t\t// TODO look up original type definition, and use to build a body of type tuple\n\t\t\tScopeItem::Data(_data) => unimplemented!(),\n\t\t\tScopeItem::Placeholder => None,\n\t\t}\n\t}\n\tpub fn checked_call(&self, ctx: &mut Ctx, item: &ScopeItem<'a>, arguments: Vec<ScopeItem<'a>>) -> Option<ScopeItem<'a>> {\n\t\tmatch item {\n\t\t\tScopeItem::Type(_) => None,\n\t\t\tScopeItem::Procedure(procedure_definition) => {\n\t\t\t\tlet mut call_scope: Scope<'a> = self.clone();\n\t\t\t\tfor (arg, (param_name, _)) in std::iter::zip(arguments.into_iter(), procedure_definition.parameters.iter()) {\n\t\t\t\t\tcall_scope.checked_insert(ctx, &param_name, arg)?;\n\t\t\t\t}\n\t\t\t\tcall_scope.reduce_statements(ctx, 
&procedure_definition.statements)\n\t\t\t},\n\t\t\t// TODO look up original type definition, and use to build a body of type tuple\n\t\t\tScopeItem::Data(_data) => unimplemented!(),\n\t\t\tScopeItem::Placeholder => None,\n\t\t}\n\t}\n\n\tpub fn type_check_access_path(&self, ctx: &mut Ctx, item: &ScopeItem<'a>, path: &Ident) -> Option<ScopeItem<'a>> {\n\t\tmatch item {\n\t\t\tScopeItem::Type(type_definition) => {\n\t\t\t\t// look through the type_definition to see if it has a constructor with this name\n\t\t\t\tmatch &type_definition.body {\n\t\t\t\t\tTypeBody::Unit => {\n\t\t\t\t\t\tctx.add_error(format!(\"can't access property {path} on unit type {}\", type_definition.name));\n\t\t\t\t\t\tNone\n\t\t\t\t\t},\n\t\t\t\t\tTypeBody::Union{ branches } => {\n\t\t\t\t\t\tbranches.iter().find(|b| b == &path)?;\n\t\t\t\t\t\tSome(ScopeItem::Type(type_definition))\n\t\t\t\t\t},\n\t\t\t\t}\n\t\t\t},\n\t\t\tScopeItem::Procedure(procedure_definition) => {\n\t\t\t\tctx.add_error(format!(\"can't access property {path} on procedure {}\", procedure_definition.name));\n\t\t\t\tNone\n\t\t\t},\n\t\t\tScopeItem::Data(_) => unimplemented!(),\n\t\t\tScopeItem::Placeholder => None,\n\t\t}\n\t}\n\tpub fn checked_access_path(&self, _ctx: &mut Ctx, item: &ScopeItem<'a>, path: &Ident) -> Option<ScopeItem<'a>> {\n\t\tmatch item {\n\t\t\tScopeItem::Type(type_definition) => {\n\t\t\t\tmatch &type_definition.body {\n\t\t\t\t\tTypeBody::Unit => None,\n\t\t\t\t\tTypeBody::Union{ branches } => {\n\t\t\t\t\t\tlet branch = branches.iter().find(|b| b == &path)?;\n\t\t\t\t\t\t// TODO body will get more complex as the nature of a branch gets more complex\n\t\t\t\t\t\tSome(ScopeItem::Data(Data{ type_path: type_definition.name.clone(), name: branch.clone(), body: Body::Unit }))\n\t\t\t\t\t},\n\t\t\t\t}\n\t\t\t},\n\t\t\tScopeItem::Procedure(_) => None,\n\t\t\tScopeItem::Placeholder => None,\n\t\t\tdata => Some(data.clone()),\n\t\t}\n\t}\n\n\tpub fn placeholder_get<'s>(&'s self, ident: &Ident) -> &'s 
ScopeItem<'a> {\n\t\tself.scope.get(ident).unwrap_or(&ScopeItem::Placeholder)\n\t}\n\n\tpub fn checked_get<'s>(&'s self, ctx: &mut Ctx, ident: &Ident) -> Option<&'s ScopeItem<'a>> {\n\t\tmatch self.scope.get(ident) {\n\t\t\tNone => {\n\t\t\t\tctx.add_error(format!(\"{ident} can't be found\"));\n\t\t\t\tNone\n\t\t\t},\n\t\t\titem => item,\n\t\t}\n\t}\n\n\t// pub fn checked_exists(arg: Type) -> RetType {\n\t// \tunimplemented!()\n\t// }\n}\n\n#[derive(Debug)]\npub struct Ctx {\n\tpub errors: Vec<String>,\n\tpub debug_trace: Vec<String>,\n}\n\nimpl Ctx {\n\tpub fn add_error(&mut self, error: String) {\n\t\tself.errors.push(error);\n\t}\n\tpub fn add_debug(&mut self, debug: String) {\n\t\tself.debug_trace.push(debug);\n\t}\n}\n\n\n#[cfg(test)]\nmod tests {\n\tuse super::*;\n\tuse crate::parser;\n\n\t#[test]\n\tfn basic_type_errors() {\n\t\tlet i = r#\"\n\t\t\ttype Day;\n\t\t\t\t| Monday\n\t\t\t\t| Tuesday\n\t\t\t\t| Wednesday\n\t\t\t\t| Thursday\n\t\t\t\t| Friday\n\t\t\t\t| Saturday\n\t\t\t\t| Sunday\n\n\t\t\ttype Bool;\n\t\t\t\t| True\n\t\t\t\t| False\n\t\t\"#;\n\t\tlet (remaining, module_items) = parser::parse_file_with_indentation(3, i).unwrap();\n\t\tassert_eq!(remaining, \"\");\n\t\tlet (mut scope, mut ctx) = Scope::new();\n\t\tscope.type_check_module(&mut ctx, &module_items);\n\t\tassert!(ctx.errors.is_empty());\n\n\t\tfor (input, expected_errors) in [\n\t\t\t(r#\"\n\t\t\t\tproc same_day(d: Nope): Day;\n\t\t\t\t\td\n\t\t\t\"#, vec![\"Nope can't be found\"]),\n\t\t\t(r#\"\n\t\t\t\tproc same_day(d: Day): Nope;\n\t\t\t\t\td\n\t\t\t\"#, vec![\"Nope can't be found\"]),\n\t\t\t(r#\"\n\t\t\t\tproc same_day(a: Day): Day;\n\t\t\t\t\td\n\t\t\t\"#, vec![\"d can't be found\"]),\n\t\t\t(r#\"\n\t\t\t\tproc same_day(d: Day): Day;\n\t\t\t\t\ta\n\t\t\t\"#, vec![\"a can't be found\"]),\n\t\t\t(r#\"\n\t\t\t\tproc same_day(): Day;\n\t\t\t\t\td\n\t\t\t\"#, vec![\"d can't be found\"]),\n\t\t\t(r#\"\n\t\t\t\tproc same_day(d: Day, d: Day): Day;\n\t\t\t\t\td\n\t\t\t\"#, vec![\"name d has 
already been used\"]),\n\t\t\t(r#\"\n\t\t\t\tproc same_day(d: Day): Day;\n\t\t\t\t\td\n\t\t\t\tproc same_day(d: Day): Day;\n\t\t\t\t\td\n\t\t\t\"#, vec![\"name same_day has already been used\"]),\n\t\t\t(r#\"\n\t\t\t\tproc same_day(d: Day, b: Bool): Day;\n\t\t\t\t\tb\n\t\t\t\"#, vec![\"Bool isn't assignable to Day\"]),\n\t\t\t(r#\"\n\t\t\t\tproc same_day(d: Day, b: Bool): Bool;\n\t\t\t\t\td\n\t\t\t\"#, vec![\"Day isn't assignable to Bool\"]),\n\t\t\t(r#\"\n\t\t\t\tproc same_day(d: Day, b: Bool): Bool;\n\t\t\t\t\tmatch d;\n\t\t\t\t\t\tDay.Monday => Day.Monday\n\t\t\t\t\t\t_ => Day.Monday\n\t\t\t\"#, vec![\"Day isn't assignable to Bool\"]),\n\t\t\t(r#\"\n\t\t\t\tproc same_day(d: Day, b: Bool): Day;\n\t\t\t\t\tmatch d;\n\t\t\t\t\t\tBool.True => Day.Monday\n\t\t\t\t\t\t_ => Day.Monday\n\t\t\t\"#, vec![\"Bool isn't a match for Day\"]),\n\t\t\t(r#\"\n\t\t\t\tproc same_day(d: Day, b: Bool): Day;\n\t\t\t\t\tmatch b;\n\t\t\t\t\t\tDay.Monday => Day.Monday\n\t\t\t\t\t\t_ => Day.Tuesday\n\t\t\t\"#, vec![\"Day isn't a match for Bool\"]),\n\t\t\t(r#\"\n\t\t\t\tproc bool_negate(b: Bool): Day;\n\t\t\t\t\tmatch b;\n\t\t\t\t\t\tBool.True => Bool.False\n\t\t\t\t\t\tBool.False => Bool.True\n\t\t\t\"#, vec![\"Bool isn't assignable to Day\"]),\n\t\t] {\n\t\t\tlet (remaining, module_items) = parser::parse_file_with_indentation(4, input).unwrap();\n\t\t\tassert_eq!(remaining, \"\");\n\n\t\t\tlet mut loop_scope = scope.clone();\n\t\t\tloop_scope.type_check_module(&mut ctx, &module_items);\n\t\t\tassert_eq!(ctx.errors, expected_errors);\n\t\t\tctx.errors.clear();\n\t\t}\n\t}\n\n\t#[test]\n\tfn foundations_day_of_week() {\n\t\tlet i = r#\"\n\t\t\ttype Day;\n\t\t\t\t| Monday\n\t\t\t\t| Tuesday\n\t\t\t\t| Wednesday\n\t\t\t\t| Thursday\n\t\t\t\t| Friday\n\t\t\t\t| Saturday\n\t\t\t\t| Sunday\n\n\t\t\tproc next_weekday(d: Day): Day;\n\t\t\t\tmatch d;\n\t\t\t\t\tDay.Monday => Day.Tuesday\n\t\t\t\t\tDay.Tuesday => Day.Wednesday\n\t\t\t\t\tDay.Wednesday => Day.Thursday\n\t\t\t\t\tDay.Thursday => 
Day.Friday\n\t\t\t\t\t_ => Day.Monday\n\n\t\t\tproc same_day(d: Day): Day;\n\t\t\t\td\n\t\t\"#;\n\t\tlet (remaining, module_items) = parser::parse_file_with_indentation(3, i).unwrap();\n\t\tassert_eq!(remaining, \"\");\n\n\t\tlet (mut scope, mut ctx) = Scope::new();\n\t\tscope.type_check_module(&mut ctx, &module_items);\n\t\tassert!(ctx.errors.is_empty());\n\t\tassert!(ctx.debug_trace.is_empty());\n\n\t\tlet term = parser::parse_expression(0, \"Day\").unwrap().1;\n\t\tassert_eq!(scope.reduce_term(&mut ctx, &term).unwrap(), *scope.scope.get(&\"Day\".to_string()).unwrap());\n\t\tassert!(ctx.errors.is_empty());\n\n\t\tfn make_day(day: &str) -> ScopeItem {\n\t\t\tScopeItem::Data(Data{ name: day.into(), type_path: \"Day\".into(), body: Body::Unit })\n\t\t}\n\n\t\tfor (input, expected) in [\n\t\t\t(\"Day.Monday\", \"Monday\"),\n\t\t\t(\"same_day(Day.Monday)\", \"Monday\"),\n\t\t\t(\"next_weekday(Day.Friday)\", \"Monday\"),\n\t\t\t(\"next_weekday(next_weekday(Day.Friday))\", \"Tuesday\"),\n\t\t] {\n\t\t\tlet term = parser::parse_expression(0, input).unwrap().1;\n\t\t\tassert_eq!(scope.reduce_term(&mut ctx, &term).unwrap(), make_day(expected));\n\t\t\tassert!(ctx.errors.is_empty());\n\t\t}\n\n\t\tlet i = r#\"\n\t\t\tprop Eq(@T: type): [T, T];\n\t\t\t\t(t: T): [t, t]\n\n\t\t\tthm example_next_weekday: Eq[next_weekday(Day.Saturday), Day.Monday];\n\t\t\t\tEq(Day.Monday)\n\n\t\t\tthm example_next_weekday_cleaner: next_weekday(Day.Saturday) :Eq Day.Monday; _\n\t\t\tthm example_next_weekday_sugar: next_weekday(Day.Saturday) == Day.Monday; _\n\t\t\"#;\n\t\tlet (remaining, proof_items) = parser::parse_file_with_indentation(3, i).unwrap();\n\t\tassert_eq!(remaining, \"\");\n\n\t\tscope.type_check_module(&mut ctx, &proof_items);\n\t\tassert!(ctx.errors.is_empty());\n\t\tassert!(ctx.debug_trace.is_empty());\n\n\n\t\t// let i = r#\"\n\t\t// \tdebug next_weekday(Day.Friday)\n\t\t// \tdebug next_weekday(next_weekday(Day.Saturday))\n\t\t// \"#;\n\t\t// let (remaining, debug_items) = 
parser::parse_file_with_indentation(3, i).unwrap();\n\t\t// assert_eq!(remaining, \"\");\n\n\t\t// scope.type_check_module(&mut ctx, &debug_items);\n\t\t// assert!(ctx.errors.is_empty());\n\t\t// assert_eq!(ctx.debug_trace, vec![\"Day.Monday\", \"Day.Tuesday\"]);\n\t}\n}\n"
  },
  {
    "path": "src/lib.rs",
    "content": "mod parser;\nmod checker;\nmod ast;\n"
  },
  {
    "path": "src/main.rs",
    "content": "// use magmide::ast;\n// use magmide::parser;\n// use magmide::Database;\n// use magmide::checker;\n\nfn main() {\n\t// let path = PathBuf::from(r\"/\");\n\n\t// // starting from the original source file, we walk the imports (which for now don't exist) and add source files as we go (which probably requires intelligent separation to make sure we can do import tracking without an incremental database present)\n\t// // or not? things like type checking are incredibly query-oriented, so it probably doesn't make sense\n\n\t// let db = magmide::Database::default();\n\t// let source_file = magmide::SourceFile::new(&db, \"bad\".into(), 0, path);\n\t// magmide::tracked_parse_file(&db, source_file);\n\t// let diagnostics = magmide::tracked_parse_file::accumulated::<magmide::Diagnostic>(&db, source_file);\n\t// eprintln!(\"{diagnostics:?}\");\n}\n"
  },
  {
    "path": "src/old.md",
    "content": "```rust\n#[derive(Debug, Eq, PartialEq, Clone)]\npub struct ModuleItemBlock {\n  pub line: usize,\n  pub body: String,\n  pub kind: ModuleItemBlockKind,\n}\n\n#[derive(Debug, Eq, PartialEq, Clone)]\npub enum ModuleItemBlockKind {\n  Procedure{ name: String },\n  Type{ name: String },\n  Debug,\n  Error,\n}\n\npub fn parse_module_item_blocks(indentation: usize, i: &str) -> Vec<ModuleItemBlock> {\n  let mut module_item_blocks = Vec::new();\n  let mut current_block_body = Vec::new();\n  let mut current_block_line = 0;\n  let mut current_block_kind = None;\n\n  fn close_block(\n    module_item_blocks: &mut Vec<ModuleItemBlock>,\n    current_block_body: &mut Vec<&str>,\n    line: usize,\n    kind: ModuleItemBlockKind,\n  ) {\n    let body = current_block_body.join(\"\\n\");\n    current_block_body.clear();\n    module_item_blocks.push(ModuleItemBlock{ line, body, kind });\n  }\n\n  for (index, body_line) in i.lines().enumerate() {\n    let line = index + 1;\n\n    if get_tab_count(body_line) != indentation || body_line.len() == 0 {\n      current_block_body.push(body_line);\n      continue;\n    }\n\n    let trimmed_body_line = body_line.trim_start();\n    if trimmed_body_line.starts_with(';') {\n      current_block_body.push(body_line);\n      if current_block_kind.is_none() {\n        current_block_kind = Some(ModuleItemBlockKind::Error);\n      }\n      continue;\n    }\n\n    use ModuleItemBlockKind::*;\n\n    let kind = alt((\n      map(preceded(tag(\"proc \"), parse_ident), |name| Procedure{ name: name.to_string() }),\n      map(preceded(tag(\"type \"), parse_ident), |name| Type{ name: name.to_string() }),\n      value(Debug, tag(\"debug \")),\n    ))(trimmed_body_line)\n      .map(|(_, kind)| kind)\n      .unwrap_or(ModuleItemBlockKind::Error);\n\n    let previous_kind = std::mem::replace(&mut current_block_kind, Some(kind));\n    if let Some(previous_kind) = previous_kind {\n      close_block(&mut module_item_blocks, &mut current_block_body, 
current_block_line, previous_kind);\n      current_block_line = line;\n    }\n    current_block_body.push(body_line);\n  }\n  close_block(&mut module_item_blocks, &mut current_block_body, current_block_line, current_block_kind.unwrap_or(ModuleItemBlockKind::Error));\n\n  module_item_blocks\n}\n\n\n\nfn make_block(line: usize, kind: ModuleItemBlockKind, body: &str) -> ModuleItemBlock {\n  ModuleItemBlock{ line, body: body.into(), kind }\n}\n\n#[test]\nfn test_parse_module_item_blocks() {\n  let i = r#\"\n    proc hello\n      what\n      here\n    ;\n      actual body\n      is::\n        arbitary\n          nested\n        things\n      yo\n\n        sup\n\n    type hey; sup\n    type yoyo;\n      | whatev\n      | thing\n        and stuff\n          and whatever\n\n    bad\n    alsobad\n      | thing\n      | hmm\n      sdfjdk\n        dkfjdk\n\n    debug hey\n    debug sup\n      big\n      things\n        poppin\n      and such\n\n    proc same_day(d: Day): Day; asdf\n\n  \"#;\n\n  assert_eq!(parse_module_item_blocks(3, i), [\n    make_block(0, ModuleItemBlockKind::Procedure{ name: \"hello\".into() }, \"\\n\\t\\t\\tproc hello\\n\\t\\t\\t\\twhat\\n\\t\\t\\t\\there\\n\\t\\t\\t;\\n\\t\\t\\t\\tactual body\\n\\t\\t\\t\\tis::\\n\\t\\t\\t\\t\\tarbitary\\n\\t\\t\\t\\t\\t\\tnested\\n\\t\\t\\t\\t\\tthings\\n\\t\\t\\t\\tyo\\n\\n\\t\\t\\t\\t\\tsup\\n\"),\n    make_block(15, ModuleItemBlockKind::Type{ name: \"hey\".into() }, \"\\t\\t\\ttype hey; sup\"),\n    make_block(16, ModuleItemBlockKind::Type{ name: \"yoyo\".into() }, \"\\t\\t\\ttype yoyo;\\n\\t\\t\\t\\t| whatev\\n\\t\\t\\t\\t| thing\\n\\t\\t\\t\\t\\tand stuff\\n\\t\\t\\t\\t\\t\\tand whatever\\n\"),\n    make_block(22, ModuleItemBlockKind::Error, \"\\t\\t\\tbad\"),\n    make_block(23, ModuleItemBlockKind::Error, \"\\t\\t\\talsobad\\n\\t\\t\\t\\t| thing\\n\\t\\t\\t\\t| hmm\\n\\t\\t\\t\\tsdfjdk\\n\\t\\t\\t\\t\\tdkfjdk\\n\"),\n    make_block(29, ModuleItemBlockKind::Debug, \"\\t\\t\\tdebug hey\"),\n    make_block(30, 
ModuleItemBlockKind::Debug, \"\\t\\t\\tdebug sup\\n\\t\\t\\t\\tbig\\n\\t\\t\\t\\tthings\\n\\t\\t\\t\\t\\tpoppin\\n\\t\\t\\t\\tand such\\n\"),\n    make_block(36, ModuleItemBlockKind::Procedure{ name: \"same_day\".into() }, \"\\t\\t\\tproc same_day(d: Day): Day; asdf\\n\\n\\t\\t\"),\n  ]);\n}\n```\n"
  },
  {
    "path": "src/parser.rs",
    "content": "// https://matklad.github.io/2023/05/21/resilient-ll-parsing-tutorial.html\n// https://github.com/rust-analyzer/rowan\n// https://github.com/salsa-rs/salsa\n\n// https://github.com/rust-bakery/nom\n// https://tfpk.github.io/nominomicon/chapter_1.html\n// https://crates.io/crates/nom-peg\n\nuse nom::{\n\tbytes::complete::{tag, take_while, take_while1},\n\tcharacter::{complete::{tab, newline, char as c}},\n\tbranch::alt,\n\tcombinator::{value, opt, map, all_consuming},\n\tmulti::{count, many0, many1, separated_list0, separated_list1},\n\tsequence::{preceded, delimited, separated_pair, terminated, tuple},\n\t// Finish,\n\tIResult,\n};\nuse crate::ast::*;\n\npub type DiscardingResult<T> = Result<T, nom::Err<nom::error::Error<T>>>;\n\nfn is_underscore(chr: char) -> bool {\n\tchr == '_'\n}\nfn is_ident(chr: char) -> bool {\n\tchr.is_ascii_alphanumeric() || is_underscore(chr)\n}\nfn is_start_ident(chr: char) -> bool {\n\tchr.is_ascii_alphabetic() || is_underscore(chr)\n}\n\n\npub fn parse_ident(i: &str) -> IResult<&str, String> {\n\tlet (i, first) = take_while1(is_start_ident)(i)?;\n\tlet (i, rest) = take_while(is_ident)(i)?;\n\tOk((i, format!(\"{}{}\", first, rest)))\n}\n\npub fn parse_branch(_: usize, i: &str) -> IResult<&str, String> {\n\tlet (i, b) = preceded(tag(\"| \"), parse_ident)(i)?;\n\tOk((i, b.into()))\n}\n\nfn parse_indents(indentation: usize, i: &str) -> IResult<&str, Vec<char>> {\n\tcount(tab, indentation)(i)\n}\nfn indents(i: &str, indentation: usize) -> DiscardingResult<&str> {\n\tOk(parse_indents(indentation, i)?.0)\n}\nfn parse_newlines(i: &str) -> IResult<&str, Vec<char>> {\n\tmany0(newline)(i)\n}\nfn newlines(i: &str) -> DiscardingResult<&str> {\n\tOk(parse_newlines(i)?.0)\n}\n\nfn indented_line<T>(indentation: usize, i: &str, line_parser: fn(usize, &str) -> IResult<&str, T>) -> IResult<&str, T> {\n\tlet i = indents(i, indentation)?;\n\tline_parser(indentation, i)\n}\n\nfn indented_block<T>(indentation: usize, i: &str, line_parser: 
fn(usize, &str) -> IResult<&str, T>) -> IResult<&str, Vec<T>> {\n\tlet indentation = indentation + 1;\n\tlet i = newlines(i)?;\n\tlet (i, items) = separated_list1(many1(newline), |i| indented_line(indentation, i, line_parser))(i)?;\n\n\tOk((i, items))\n}\n\npub fn parse_file(i: &str) -> IResult<&str, Vec<ModuleItem>> {\n\tparse_file_with_indentation(0, i)\n}\n\npub fn parse_file_with_indentation(indentation: usize, i: &str) -> IResult<&str, Vec<ModuleItem>> {\n\tall_consuming(\n\t\tterminated(\n\t\t\t|i| parse_module_items_with_indentation(indentation, i),\n\t\t\t// saturating_sub avoids the usize underflow panic when indentation is 0\n\t\t\ttuple((parse_newlines, |i| parse_indents(indentation.saturating_sub(1), i), parse_newlines)),\n\t\t)\n\t)(i)\n}\n\npub fn parse_module_items(i: &str) -> IResult<&str, Vec<ModuleItem>> {\n\tparse_module_items_with_indentation(0, i)\n}\n\npub fn parse_module_items_with_indentation(indentation: usize, i: &str) -> IResult<&str, Vec<ModuleItem>> {\n\tseparated_list1(many1(newline), |i| parse_module_item(indentation, i))(i)\n}\n\npub fn parse_module_item(indentation: usize, i: &str) -> IResult<&str, ModuleItem> {\n\tlet i = newlines(i)?;\n\tlet i = indents(i, indentation)?;\n\n\talt((\n\t\tmap(|i| parse_type_definition(indentation, i), ModuleItem::Type),\n\t\tmap(|i| parse_procedure_definition(indentation, i), ModuleItem::Procedure),\n\t\tmap(|i| parse_debug_statement(indentation, i), ModuleItem::Debug),\n\t))(i)\n}\n\npub fn parse_type_definition(indentation: usize, i: &str) -> IResult<&str, TypeDefinition> {\n\tlet (i, name) = preceded(tag(\"type \"), parse_ident)(i)?;\n\tlet here_branch = |i| indented_block(indentation, i, parse_branch);\n\tlet (i, branches) = opt(preceded(c(';'), here_branch))(i)?;\n\tlet body = match branches {\n\t\tSome(branches) => TypeBody::Union{ branches },\n\t\tNone => TypeBody::Unit,\n\t};\n\n\tOk((i, TypeDefinition{ name: name.into(), body }))\n}\n\npub fn parse_procedure_definition(indentation: usize, i: &str) -> IResult<&str, ProcedureDefinition> {\n\tlet (i, name) = 
preceded(tag(\"proc \"), parse_ident)(i)?;\n\tlet (i, parameters) = delimited(c('('), parse_parameters, c(')'))(i)?;\n\tlet (i, return_type) = preceded(tag(\": \"), parse_ident)(i)?;\n\tlet here_statement = |i| indented_block(indentation, i, parse_statement);\n\tlet (i, statements) = preceded(c(';'), here_statement)(i)?;\n\n\tOk((i, ProcedureDefinition{ name: name.into(), parameters, return_type, statements }))\n}\n\npub fn parse_debug_statement(indentation: usize, i: &str) -> IResult<&str, DebugStatement> {\n\tlet (i, term) = preceded(tag(\"debug \"), |i| parse_term(indentation, i))(i)?;\n\tOk((i, DebugStatement{ term }))\n}\n\n// fn parse_parameters(indentation: usize) -> impl Fn(&str) -> IResult<&str, ModuleItem> {\npub fn parse_parameters(i: &str) -> IResult<&str, Vec<(String, String)>> {\n\tseparated_list0(tag(\", \"), parse_parameter)(i)\n}\npub fn parse_parameter(i: &str) -> IResult<&str, (String, String)> {\n\tseparated_pair(parse_ident, tag(\": \"), parse_ident)(i)\n}\n\npub fn parse_statement(indentation: usize, i: &str) -> IResult<&str, Term> {\n\tparse_term(indentation, i)\n}\n\npub fn parse_term(indentation: usize, i: &str) -> IResult<&str, Term> {\n\talt((\n\t\t|i| parse_match(indentation, i),\n\t\t|i| parse_expression(indentation, i),\n\t))(i)\n}\n\npub fn parse_match(indentation: usize, i: &str) -> IResult<&str, Term> {\n\t// TODO need to figure out how to use the full parse_term for the discriminant\n\tlet (i, discriminant) = delimited(tag(\"match \"), |i| parse_expression(indentation, i), c(';'))(i)?;\n\tlet (i, arms) = indented_block(indentation, i, parse_match_arm)?;\n\n\tOk((i, Term::Match{ discriminant: discriminant.into(), arms }))\n}\n\npub fn parse_match_arm(indentation: usize, i: &str) -> IResult<&str, MatchArm> {\n\tlet here_term = |i| parse_term(indentation, i);\n\tlet (i, (pattern, statement)) = separated_pair(here_term, tag(\" => \"), here_term)(i)?;\n\tOk((i, MatchArm{ pattern, statement }))\n}\n\n// pub fn parse_pattern(i: &str) -> 
IResult<&str, Pattern> {\npub fn parse_pattern(i: &str) -> IResult<&str, String> {\n\tparse_ident(i)\n}\n\npub fn parse_expression(indentation: usize, i: &str) -> IResult<&str, Term> {\n\tlet (i, first) = parse_ident(i)?;\n\tlet (i, rest) = many0(|i| parse_chain_item(indentation, i))(i)?;\n\tlet term =\n\t\tif rest.len() == 0 { Term::Lone(first) }\n\t\telse { Term::Chain(first, rest) };\n\tOk((i, term))\n}\n\npub fn parse_chain_item(indentation: usize, i: &str) -> IResult<&str, ChainItem> {\n\talt((\n\t\tmap(preceded(c('.'), parse_ident), ChainItem::Access),\n\t\tmap(\n\t\t\tdelimited(c('('), separated_list0(tag(\", \"), |i| parse_expression(indentation, i)), c(')')),\n\t\t\t|arguments| ChainItem::Call{ arguments },\n\t\t),\n\t))(i)\n}\n\n\nfn get_tab_count(i: &str) -> usize {\n\tlet mut tab_count = 0;\n\tlet mut line_chars = i.chars();\n\twhile line_chars.next() == Some('\\t') {\n\t\ttab_count = tab_count + 1;\n\t}\n\ttab_count\n}\n\n\n#[cfg(test)]\nmod tests {\n\tuse super::*;\n\n\tfn make_lone(s: &str) -> Term {\n\t\tTerm::Lone(s.into())\n\t}\n\tfn make_day(day: &str) -> Term {\n\t\tTerm::Chain(\"Day\".into(), vec![ChainItem::Access(day.into())])\n\t}\n\n\t#[test]\n\tfn test_parse_expression() {\n\t\tlet i = \"Day.Monday\";\n\t\tassert_eq!(\n\t\t\tparse_expression(0, i).unwrap().1,\n\t\t\tmake_day(\"Monday\"),\n\t\t);\n\n\t\tlet i = \"fn(next, hello)\";\n\t\tassert_eq!(\n\t\t\tparse_expression(0, i).unwrap().1,\n\t\t\tTerm::Chain(\"fn\".into(), vec![ChainItem::Call{ arguments: vec![make_lone(\"next\"), make_lone(\"hello\")] }]),\n\t\t);\n\t}\n\n\t#[test]\n\tfn test_parse_match() {\n\t\tlet i = r#\"\n\t\t\tmatch d;\n\t\t\t\tDay.Monday => Day.Tuesday\n\t\t\t\tDay.Tuesday => Day.Wednesday\n\t\t\"#.trim();\n\t\tassert_eq!(\n\t\t\tparse_match(3, i).unwrap().1,\n\t\t\tTerm::Match{ discriminant: Box::new(Term::Lone(\"d\".into())), arms: vec![\n\t\t\t\tMatchArm{ pattern: make_day(\"Monday\"), statement: make_day(\"Tuesday\") },\n\t\t\t\tMatchArm{ pattern: 
make_day(\"Tuesday\"), statement: make_day(\"Wednesday\") },\n\t\t\t] },\n\t\t);\n\t}\n\n\t#[test]\n\tfn test_parse_type_definition() {\n\t\tlet i = r#\"\n\t\t\ttype Day;\n\t\t\t\t| Monday\n\t\t\t\t| Tuesday\n\t\t\"#.trim();\n\t\tassert_eq!(\n\t\t\tparse_type_definition(3, i).unwrap().1,\n\t\t\tTypeDefinition{\n\t\t\t\tname: \"Day\".into(),\n\t\t\t\tbody: TypeBody::Union{ branches: vec![\"Monday\".into(), \"Tuesday\".into()] },\n\t\t\t},\n\t\t);\n\t}\n\n\t#[test]\n\tfn test_parse_procedure() {\n\t\tlet i = r#\"\n\t\t\tproc same_day(d: Day): Day;\n\t\t\t\td\n\t\t\"#.trim();\n\t\tassert_eq!(\n\t\t\tparse_procedure_definition(3, i).unwrap().1,\n\t\t\tProcedureDefinition{\n\t\t\t\tname: \"same_day\".into(), parameters: vec![(\"d\".into(), \"Day\".into())],\n\t\t\t\treturn_type: \"Day\".into(), statements: vec![Term::Lone(\"d\".into())],\n\t\t\t},\n\t\t);\n\n\t\tlet i = r#\"\n\t\t\tproc same_day(): Day;\n\t\t\t\td\n\t\t\"#.trim();\n\t\tassert_eq!(\n\t\t\tparse_procedure_definition(3, i).unwrap().1,\n\t\t\tProcedureDefinition{\n\t\t\t\tname: \"same_day\".into(), parameters: vec![],\n\t\t\t\treturn_type: \"Day\".into(), statements: vec![Term::Lone(\"d\".into())],\n\t\t\t},\n\t\t);\n\t}\n\n\t#[test]\n\tfn test_parse_debug_statement() {\n\t\tlet i = r#\"\n\t\t\tdebug next_weekday(next_weekday(d))\n\t\t\"#.trim();\n\t\tassert_eq!(\n\t\t\tparse_debug_statement(3, i).unwrap().1,\n\t\t\tDebugStatement{\n\t\t\t\tterm: Term::Chain(\"next_weekday\".into(), vec![ChainItem::Call{ arguments: vec![\n\t\t\t\t\tTerm::Chain(\"next_weekday\".into(), vec![ChainItem::Call{ arguments: vec![make_lone(\"d\")] }]),\n\t\t\t\t]}]),\n\t\t\t},\n\t\t);\n\t}\n}\n\n// fn deindent_to_base(s: &str) -> String {\n// \tlet s = s.strip_prefix(\"\\n\").unwrap_or(s);\n// \tlet (s, base_tabs) = match is_a(\"\\t\")(s) {\n// \t\tOk((s, tabs)) => (s, tabs.len()),\n// \t\tErr(_) => { return s.into() },\n// \t};\n\n\n// \tlet mut final_s = String::new();\n// \tfor line in s.split(\"\\n\") {\n// \t\t// take 
base_tabs number of tabs, always expecting\n// \t}\n\n// \ts.into()\n// }\n"
  },
  {
    "path": "theory/_CoqProject",
    "content": "-R . theory\n-Q /home/blaine/lab/cpdtlib Cpdt\n"
  },
  {
    "path": "theory/list_assertions.v",
    "content": "Set Implicit Arguments. Set Asymmetric Patterns.\nRequire Import List.\nImport ListNotations.\nFrom stdpp Require Import base options list tactics.\n\n(* All just collects unordered independent assertions. *)\nSection All.\n\tInductive All: list Prop -> Prop :=\n\t\t| All_empty: All []\n\t\t| All_push: forall (P: Prop) (l: list Prop), P -> All l -> All (P :: l)\n\t.\n\tHint Constructors All: core.\n\n\tTheorem All_pop (P: Prop) (l: list Prop): All (P :: l) -> P.\n\tProof. inversion 1; auto. Qed.\n\n\tTheorem All_And (P: Prop) (l: list Prop):\n\t\tP /\\ All l <-> All (P :: l).\n\tProof.\n\t\tsplit.\n\t\t- intros [??]; auto.\n\t\t- inversion 1; auto.\n\tQed.\n\n\tTheorem All_flatten_head (A B: Prop) (l: list Prop):\n\t\tAll ((A /\\ B) :: l) <-> All (A :: B :: l).\n\tProof.\n\t\tsplit.\n\t\t- inversion 1; naive_solver.\n\t\t- inversion 1 as [|??? H']; inversion H'; auto.\n\tQed.\n\n\tTheorem All_permutation (A B: list Prop):\n\t\tPermutation A B -> All A -> All B.\n\tProof.\n\t\tintros Hpermutation HA; induction Hpermutation as []; try inversion HA; auto.\n\t\tinversion HA as [|??? Hl]; inversion Hl; auto.\n\tQed.\n\n\tGlobal Instance Proper_Permutation_All: Proper (Permutation ==> flip impl) All.\n\tProof.\n\t\tintros ??[]?; eauto using All_permutation, Permutation_sym.\n\t\tinversion H as [| ??? Hl']; inversion Hl'; auto.\n\tQed.\n\n\tTheorem All_permutation_cleaner (A B: list Prop):\n\t\tPermutation A B -> All A -> All B.\n\tProof. intros ?H; rewrite <-H; auto. 
Qed.\n\n\tTheorem All_In_middle (P: Prop) (before after: list Prop):\n\t\tAll (before ++ [P] ++ after) -> P.\n\tProof.\n\t\tintros H;\n\t\tassert (Hpermutation: Permutation (before ++ [P] ++ after) ([P] ++ (before ++ after)))\n\t\t\tby eauto using Permutation_app_swap_app;\n\t\trewrite Hpermutation in H;\n\t\tassert (Hswap: ([P] ++ before ++ after) = (P :: (before ++ after))) by eauto;\n\t\trewrite Hswap in H;\n\t\teapply All_pop; eauto.\n\tQed.\n\n\tTheorem All_In (P: Prop) (l: list Prop):\n\t\tIn P l -> All l -> P.\n\tProof.\n\t\tintros Hin Hall;\n\t\tapply in_split in Hin as [before [after]]; subst;\n\t\trewrite cons_middle in Hall;\n\t\teapply All_In_middle; eauto.\n\tQed.\nEnd All.\n#[export] Hint Constructors All: core.\n\n(* Compatible links assertions that must have some notion of compatibility between all items. *)\nSection Compatible.\n\tContext {T: Type}.\n\tHypothesis ItemsCompatible: T -> T -> Prop.\n\tHypothesis ItemsCompatible_symmetric: forall t1 t2,\n\t\tItemsCompatible t1 t2 -> ItemsCompatible t2 t1.\n\n\tInductive Compatible: list T -> Prop :=\n\t\t| Compatible_empty: Compatible []\n\t\t| Compatible_push: forall t l,\n\t\t\tForall (ItemsCompatible t) l ->\n\t\t\tCompatible (t :: l)\n\t.\n\tHint Constructors Compatible: core.\n\n\tTheorem Compatible_symmetric t1 t2: Compatible [t1; t2] -> Compatible [t2; t1].\n\tProof using ItemsCompatible_symmetric.\n\t\tintros H; inversion H as [| ?? HF];\n\t\tinversion HF; eauto.\n\tQed.\n\n\t(*Theorem Compatible_swap t1 t2 l:\n\t\tCompatible (t1 :: t2 :: l) -> Compatible (t2 :: t1 :: l).\n\tProof.\nintros H.\ninversion H as [| ???].\nsubst.\n\n-\nsubst.\nconstructor.\n\tQed.\n\n\tGlobal Instance Proper_Permutation_Compatible: Proper (Permutation ==> flip impl) Compatible.\n\tProof.\nintros ??[]?; try auto.\n-\ninversion H0; subst; simpl.\ninduction H as []; try auto; subst; simpl.\n\n\n\n\n\n\t\tintros ??[]?; eauto using All_permutation, Permutation_sym.\n\t\tinversion H as [| ??? 
Hl']; inversion Hl'; auto.\n\tQed.\n\n\tTheorem Compatible_Permutation l1 l2: Permutation l1 l2 -> Compatible l1 -> Compatible l2.\nProof.\nintros HP HC.\ninduction HC as [|? ? ? ?].\n- apply Permutation_nil in HP; naive_solver.\n-\ninversion H.\n+ subst. apply Permutation_singleton_l in HP. naive_solver.\n+\nsubst.\n\nrewrite <-HP.\n\ninduction HP as []; try inversion HC; auto.\n\n\nintros Hpermutation HA; induction Hpermutation as []; try inversion HA; auto.\ninversion HA as [|??? Hl]; inversion Hl; auto.\n\tQed.*)\n\nEnd Compatible.\n#[export] Hint Constructors Compatible: core.\n\n\nSection test_Compatible_neq.\n\tTheorem Compatible_neq (a b c: nat):\n\t\ta <> b -> b <> c -> a <> c -> Compatible (fun a b => a <> b) [a; b; c].\n\tProof. auto. Qed.\nEnd test_Compatible_neq.\n\n\n\n(* Chain links assertions in a strict order, where each assertion is implied by the previous. *)\nSection Chain.\n\tContext {T: Type}.\n\tHypothesis Link: T -> T -> Prop.\n\n\tInductive Chain: list T -> Prop :=\n\t\t| Chain_empty: forall start, Chain [start]\n\t\t| Chain_link: forall past cur next,\n\t\t\tLink cur next\n\t\t\t-> Chain (cur :: past)\n\t\t\t-> Chain (next :: (cur :: past))\n\t.\n\tHint Constructors Chain: core.\n\n\tInductive Chained: T -> T -> Prop :=\n\t\t| Chained_start_finish: forall start mid finish,\n\t\t\tChain (finish :: (mid ++ [start]))\n\t\t\t-> Chained start finish\n\t.\n\n\tTheorem Chain_to_Chained start mid finish:\n\t\tChain (finish :: (mid ++ [start]))\n\t\t-> Chained start finish.\n\tProof. intros; econstructor; eauto. Qed.\n\n\nEnd Chain.\n"
  },
  {
    "path": "theory/main.v",
    "content": "Add LoadPath \"/home/blaine/lab/cpdtlib\" as Cpdt.\nSet Implicit Arguments. Set Asymmetric Patterns.\n\nFrom stdpp Require Import base options strings stringmap.\nImport ListNotations.\n(*Require Import theory.utils.*)\nRequire Import String.\n\nNotation MachineState := (stringmap nat).\n\n(*Record MachineState: Type := machine_state {\n\tglobal_identifiers: (stringmap nat);\n\tlocal_identifiers: (stringmap nat);\n}.*)\n\nInductive Operand: Type :=\n\t| Literal (n: nat)\n\t| Identifier (s: string)\n.\n\nDefinition evaluate_operand (operand: Operand) (state: MachineState): option nat :=\n\tmatch operand with\n\t| Literal n => Some n\n\t(*| Identifier s => lookup s state.(local_identifiers)*)\n\t| Identifier s => lookup s state\n\tend\n.\n(*gen_heap_valid*)\n\nInductive Instruction: Type :=\n\t| Inst_Add (dest: string) (op1 op2: Operand)\n.\n\nDefinition evaluate_instruction\n\t(instruction: Instruction)\n\t(state: MachineState)\n: option (string * nat) :=\n\tmatch instruction with\n\t| Inst_Add dest op1 op2 =>\n\t\tmatch (evaluate_operand op1 state), (evaluate_operand op2 state) with\n\t\t| (Some op1value), (Some op2value) => Some (dest, (op1value + op2value))\n\t\t| _, _ => None\n\t\tend\n\tend\n.\n\n(*proofs that evaluate_instruction returns Some if wp is satisfied*)\n\nDefinition step_instruction\n\t(instruction: Instruction)\n\t(state: MachineState)\n: option MachineState :=\n\tmatch evaluate_instruction instruction state with\n\t| None => None\n\t(*| Some (dest, value) => Some (insert dest value state.(local_identifiers))*)\n\t| Some (dest, value) => Some (insert dest value state)\n\tend\n.\n(*proofs that step_instruction returns Some if wp is satisfied*)\n\nFixpoint step_instructions\n\t(instructions: list Instruction)\n\t(state: MachineState)\n: option MachineState :=\n\tmatch instructions with\n\t| instruction :: rest_instructions => match step_instruction instruction state with\n\t\t| None => None\n\t\t| Some next_state => step_instructions 
rest_instructions next_state\n\t\tend\n\t| [] => Some state\n\tend\n.\n(*proofs that step_instructions returns Some if \"steps\" wp is satisfied*)\n\nInductive TerminatorInstruction: Type :=\n\t| Branch (label: string)\n\t| BranchIf (condition: Operand) (if_label else_label: string)\n.\n\nDefinition evaluate_terminator terminator state :=\n\tmatch terminator with\n\t| Branch label => Some label\n\t| BranchIf condition if_label else_label => match evaluate_operand condition state with\n\t\t| None => None\n\t\t| Some condition_nat => Some (if Nat.eq_dec condition_nat 1 then if_label else else_label)\n\t\tend\n\tend\n.\n(*proofs that evaluate_terminator returns Some if terminator wp is satisfied*)\n\nRecord BasicBlock: Type := {\n\tsequential: list Instruction;\n\tterminator: TerminatorInstruction;\n}.\n\nRecord ExecutableProgram: Type := {\n\tstarting_block: BasicBlock;\n\tprogram: stringmap BasicBlock;\n}.\n\nRecord ProgramState: Type := {\n\tprogram: ExecutableProgram;\n\tcurrent_label: string;\n\tcurrent_index: nat;\n}.\n\nDefinition step_block block state :=\n\tmatch step_instructions block.(sequential) state with\n\t| None => None\n\t| Some stepped_state => match evaluate_terminator block.(terminator) stepped_state with\n\t\t| None => None\n\t\t| Some next_label => Some (next_label, stepped_state)\n\t\tend\n\tend\n.\n(*proofs that step_block returns Some if overall wp is satisfied*)\n\nFrom iris.base_logic.lib Require Import gen_heap.\n\nClass magmideGS Σ := MagmideGS {\n\t(*magmideGS_invGS : invGS_gen HasLc Σ;*)\n\tmagmideGS_gen_heapGS :> gen_heapGS string nat Σ;\n\t(*magmideGS_inv_heapGS :> inv_heapGS string nat Σ;*)\n}.\n\n\nSection magmide.\n\tContext `{!magmideGS Σ}.\n\n\tDefinition state_interpretation (machine_state: MachineState): iProp Σ :=\n\t\t(*gen_heap_interp machine_state.(local_identifiers).*)\n\t\tgen_heap_interp machine_state.\n\n\tNotation \"'%' var_name '==' value\" := (mapsto (L:=string) (V:=nat) var_name (DfracOwn 1) value)\n\t\t(at level 
20, format \"'%' var_name '==' value\"): bi_scope.\n\n\tNotation spec pre_condition program post_condition :=\n\t\t([pre_condition; program; post_condition])\n\t\t(only parsing).\n\n\tDefinition one_add_program := [\n\t\t(Inst_Add \"one_add\" (Identifier \"arg\") (Literal 5))\n\t].\n\tOpen Scope bi_scope.\n\tTheorem test__one_add_program val:\n\t\tspec (%\"arg\"==val) one_add_program (%\"one_add\"==(val + 5)).\n\tProof.\n\tAdmitted.\n\n\n\tDefinition two_add_program := [\n\t\t(Inst_Add \"two_add\" (Identifier \"arg\") (Literal 1));\n\t\t(Inst_Add \"two_add\" (Identifier \"two_add\") (Literal 1))\n\t].\n\tTheorem test__two_add_program val:\n\t\tspec (%\"arg\"==val) two_add_program (%\"two_add\"==(val + 2)).\n\tProof.\n\tAdmitted.\n\nEnd magmide.\n\n\n\n\n\n\nFrom iris.algebra Require Export ofe.\nGlobal Instance MachineState_inhabited : Inhabited MachineState :=\n\tpopulate ∅.\n\n(*Canonical Structure MachineStateO := leibnizO MachineState.*)\n\n\n\n(*Notation Mask := coPset.*)\n\nSection Sized.\n\tContext {size: nat}.\n\n\tNotation register := (fin size).\n\n\tInductive Operand: Type :=\n\t\t| Literal (n: nat)\n\t\t| Register (r: register)\n\t.\n\n\n\t(*\n\t\tIt seems pretty obvious I need to break instructions into \"sequential\" and \"terminator\" versions (kinda like llvm), and then have two layers of assertions, each with their own \"later\" concept:\n\t\t\t- a layer for reasoning purely about sequential \"segments\" of instructions (the body of a basic block)\n\t\t\t- a layer for reasoning about graphs of labeled blocks. this layer is basically where we can encode some concept of recursion, and where we use the Löb induction iris provides to make it possible to reason about loops and such\n\n\t\twe have one layer of weakest preconditions (or just preconditions like in the low level code papers?) 
that moves sequential statements along\n\t\tthen we have a \"block\" structure that contains a list of sequential instructions and a single terminator instruction\n\t\tthen we have an \"execute block\" function that takes a machine state (that doesn't include an instruction pointer for now?) and transforms it according to the sequential segment and then returns the transformed state and an optional next label, with None meaning execution is finished\n\t*)\n\n\tInductive Instruction :=\n\t\t| Instruction_Exit\n\t\t| Instruction_Move (src: Operand) (dest: register)\n\t\t| Instruction_Add (val: Operand) (dest: register)\n\n\t\t| Instruction_Jump (to: nat)\n\t\t| Instruction_BranchEq (a: Operand) (b: Operand) (to: nat)\n\t\t| Instruction_BranchNeq (a: Operand) (b: Operand) (to: nat)\n\n\t\t(*| Instruction_Store (src: Operand) (dest: Operand)*)\n\t\t(*| Instruction_Load (src: Operand) (dest: register)*)\n\t.\n\tHint Constructors Instruction: core.\n\n\n\tRecord MachineState := machine_state {\n\t\tinstruction_pointer: nat;\n\t\t(*registers: (vec nat size);*)\n\t\tregisters: (gmap nat nat);\n\t\tprogram: list Instruction\n\t}.\n\n\tNotation current_instruction s :=\n\t\t(lookup s.(instruction_pointer) s.(program)) (only parsing).\n\n\tNotation incr s :=\n\t\t(S s.(instruction_pointer)) (only parsing).\n\n\tNotation get cur reg :=\n\t\t(cur.(registers) !!! reg) (only parsing).\n\n\tNotation update s dest val :=\n\t\t(vinsert dest val s.(registers)) (only parsing).\n\n\tNotation make_next cur next_instruction_pointer next_registers :=\n\t\t(machine_state next_instruction_pointer next_registers cur.(program))\n\t\t(only parsing).\n\n\tDefinition eval_operand\n\t\t(cur: MachineState) (operand: Operand)\n\t: nat :=\n\t\tmatch operand with\n\t\t| Literal n => n\n\t\t| Register r => (cur.(registers) !!! 
r)\n\t\tend\n\t.\n\n\tInductive Step: MachineState -> MachineState -> Prop :=\n\t\t| Step_Move: forall cur src dest,\n\t\t\t(current_instruction cur) = Some (Instruction_Move src dest)\n\t\t\t-> Step cur (make_next cur\n\t\t\t\t(incr cur)\n\t\t\t\t(update cur dest (eval_operand cur src))\n\t\t\t)\n\t\t(*$s==v ∗ $d==? ∗ <later>($s==v ∗ $d==? -∗ Q d v) <entails> wp Move $r $d {Q d v}*)\n\n\t\t| Step_Add: forall cur val dest,\n\t\t\t(current_instruction cur) = Some (Instruction_Add val dest)\n\t\t\t-> Step cur (make_next cur\n\t\t\t\t(incr cur)\n\t\t\t\t(update cur dest ((eval_operand cur val) + (get cur dest)))\n\t\t\t)\n\n\t\t| Step_Jump: forall cur to,\n\t\t\t(current_instruction cur) = Some (Instruction_Jump to)\n\t\t\t-> Step cur (make_next cur to cur.(registers))\n\n\t\t| Step_BranchEq_Yes: forall cur a b to,\n\t\t\t(current_instruction cur) = Some (Instruction_BranchEq a b to)\n\t\t\t-> a = b\n\t\t\t-> Step cur (make_next cur to cur.(registers))\n\n\t\t| Step_BranchEq_No: forall cur a b to,\n\t\t\t(current_instruction cur) = Some (Instruction_BranchEq a b to)\n\t\t\t-> ~(a = b)\n\t\t\t-> Step cur (make_next cur (incr cur) cur.(registers))\n\n\t\t| Step_BranchNeq_Yes: forall cur a b to,\n\t\t\t(current_instruction cur) = Some (Instruction_BranchNeq a b to)\n\t\t\t-> ~(a = b)\n\t\t\t-> Step cur (make_next cur to cur.(registers))\n\n\t\t| Step_BranchNeq_No: forall cur a b to,\n\t\t\t(current_instruction cur) = Some (Instruction_BranchNeq a b to)\n\t\t\t-> a = b\n\t\t\t-> Step cur (make_next cur (incr cur) cur.(registers))\n\t.\n\tHint Constructors Step: core.\n\n\t(*theorems about individual instructions, such as that exit never takes a step*)\n\n\t(*use the Chain prop to think about traces*)\n\n\t(*theorem that any trace with Exit at the top never continues*)\n\n\t(*theorems lifting list operators for Trace*)\n\n\tClass magmideGS (Σ: gFunctors) := MagmideG {\n\t\t(*magmideGS_invGS :> invGS_gen HasNoLc Σ;*)\n\t\t(*magmideGS_gen_heapG :> gen_heapGS nat nat 
Σ;*)\n\t\tmagmideGS_ghost_mapG :> ghost_mapG Σ nat nat;\n\t}.\n\t(*Global Opaque magmideGS_invGS.*)\n\tGlobal Arguments MagmideG {Σ}.\n\n\t(*gen_heap_interp state.(heap)*)\n\t(*leads to*)\n\t(*ghost_map_auth (gen_heap_name hG) 1 state.(heap)*)\n\t(*leads to*)\n\t(*Record state : Type := {\n\t\theap: gmap loc (option val);\n\t}.*)\n\n\tDefinition state_interpretation state := gen_heap_interp state.(heap) (*∗ steps_auth step_cnt*).\n\n\n\n\n\tDefinition wp_pre\n\t\t`{!magmideGS Σ}\n\t\t(wp: Mask -d> MachineState -d> (MachineState -d> iPropO Σ) -d> iPropO Σ)\n\t: Mask -d> MachineState -d> (MachineState -d> iPropO Σ) -d> iPropO Σ := fun mask cur Post =>\n\t\tmatch current_instruction cur with\n\t\t| None => |={mask}=> Post cur\n\t\t| Some Instruction_Exit => |={mask}=> Post cur\n\t\t| Some instruction => state_interpretation cur\n\t\t\t-∗ |={mask,∅}=> (\n\t\t\t\tCanStep cur\n\t\t\t\t∗ (forall next, Step cur next -∗ |={∅,mask}=> ▷ (\n\t\t\t\t\tstate_interpretation next\n\t\t\t\t\t∗ wp mask next Post\n\t\t\t\t))\n\t\t\t)\n\t\tend%I.\n\n\tLocal Instance wp_pre_contractive `{!magmideGS Σ}: Contractive wp_pre.\n\tProof.\n\t\trewrite /wp_pre /= => n wp wp' Hwp E e1 Φ.\n\t\tdo 25 (f_contractive || f_equiv).\n\t\tinduction num_laters_per_step as [|k IH]; simpl.\n\t\t- repeat (f_contractive || f_equiv); apply Hwp.\n\t\t- by rewrite -IH.\n\tQed.\n\n\t(*probably have to define our own Wp typeclass :( *)\n\tClass Wp :=\n\t\twp: Mask -> MachineState -> (MachineState -> iPropO Σ) -> iPropO Σ.\n\tGlobal Arguments wp {_ _ _ _ _} _ _ _%E _%I.\n\tGlobal Instance: Params (@wp) 8 := {}.\n\n\tLocal Definition wp_def `{!magmideGS Σ}: Wp := fixpoint wp_pre.\n\tLocal Definition wp_aux: seal (@wp_def). Proof. by eexists. Qed.\n\tDefinition wp' := wp_aux.(unseal).\n\tGlobal Arguments wp' {hlc Λ Σ _}.\n\tGlobal Existing Instance wp'.\n\tLocal Lemma wp_unseal `{!magmideGS Σ}: wp = @wp_def hlc Λ Σ _.\n\tProof. rewrite -wp_aux.(seal_eq) //. 
Qed.\n\nEnd Sized.\n\n\n(*\n\ndefine the wp recursively, referring to a state interpretation (that includes a reference to Step?) and using later and update modalities to link step indexing to steps in the program\n\nthen prove a bunch of consequences of the wp, such as a wp for each individual instruction or specialized versions of operators like * and -*\n\nthen do things like prove the wp is non expansive?\n\tTCEq (to_val e) None ->\n\tProper (pointwise_relation _ (dist_later n) ==> dist n) (wp (PROP:=iProp Σ) s E e).\n\nthat it preserves equivalences?\n\tProper (pointwise_relation _ (≡) ==> (≡)) (wp (PROP:=iProp Σ) s E e).\n\n\n\n\n\nthen higher level concepts, such as being able to form wps for blocks of code instead of individual instructions\n\n\n\n\n\nI want to get to the point where I can:\n\n- verify a simple straight line program that does something like adding four numbers\n\n- verify a program with a branch implemented loop\n\n- add input and output signals similar to the kinds of simple signals in a chip like a 6502, and use them to verify something like a hello world program\n\n\nthen is it time to start figuring out and building systems for trackable effects?\n\n*)\n"
  },
  {
    "path": "theory/playground.v",
    "content": "Inductive Trivial: Prop :=\n\t| TrivialCreate.\n\nDefinition Trivial_manual: Trivial := TrivialCreate.\n\nDefinition Trivial_inductive_manual (P: Prop) (p: P) (evidence: Trivial): P :=\n\tmatch evidence with\n\t| TrivialCreate => p\n\tend.\n\n\nInductive Eq {T: Type}: T -> T -> Prop :=\n\t| EqCreate: forall t, Eq t t.\n\nDefinition Eq_inductive_manual\n\t(T: Type) (P: T -> T -> Prop)\n\t(f: forall t: T, P t t) (l r: T)\n\t(evidence: Eq l r)\n: P l r :=\n\tmatch evidence in (Eq l r) return (P l r) with\n\t| EqCreate t => f t\n\tend.\n\n\nInductive And (P Q: Prop): Prop :=\n\t| AndCreate (Ph: P) (Pq: Q): And P Q.\n\nDefinition And_inductive_manual\n\t(P Q R: Prop) (f: P -> Q -> R) (evidence: And P Q)\n: R :=\n\tmatch evidence with\n\t| AndCreate _ _ p q => f p q\n\tend.\n\n\nDefinition And_left_manual (P Q: Prop) (evidence: And P Q): P :=\n\tmatch evidence with\n\t| AndCreate _ _ p q => p\n\tend.\n\nDefinition And_right_manual (P Q: Prop) (evidence: And P Q): Q :=\n\tmatch evidence with\n\t| AndCreate _ _ p q => q\n\tend.\n\nInductive Or (P Q: Prop): Prop :=\n\t| OrLeft: P -> Or P Q\n\t| OrRight: Q -> Or P Q.\n\nDefinition Or_inductive_manual\n(P Q R: Prop) (f_left: P -> R) (f_right: Q -> R) (evidence: Or P Q)\n: R :=\n\tmatch evidence with\n\t| OrLeft _ _ l => f_left l\n\t| OrRight _ _ r => f_right r\n\tend.\n\n\nInductive Bool: Type :=\n\t| BoolTrue\n\t| BoolFalse.\n\nDefinition Bool_inductive_manual\n\t(P: Bool -> Prop) (P_BoolTrue: P BoolTrue) (P_BoolFalse: P BoolFalse)\n: forall (b: Bool), P b :=\n\tfun b =>\n\t\tmatch b with\n\t\t| BoolTrue => P_BoolTrue\n\t\t| BoolFalse => P_BoolFalse\n\t\tend.\n\nDefinition Eq_manual: Eq BoolTrue BoolTrue := EqCreate BoolTrue.\n\nFail Definition Eq_manual_bad: Eq BoolTrue BoolFalse := EqCreate BoolTrue.\nFail Definition Eq_manual_bad: Eq BoolTrue BoolFalse := EqCreate BoolFalse.\nFail Definition Eq_manual_bad: Eq BoolFalse BoolTrue := EqCreate BoolTrue.\nFail Definition Eq_manual_bad: Eq BoolFalse BoolTrue := 
EqCreate BoolFalse.\n\nInductive Impossible: Prop := .\n\nDefinition BoolTrue_not_BoolFalse_manual: (Eq BoolTrue BoolFalse) -> Impossible :=\n\tfun wrong_eq =>\n\t\tmatch wrong_eq with\n\t\t| EqCreate t => match t with\n\t\t\t| BoolTrue => TrivialCreate\n\t\t\t| BoolFalse => TrivialCreate\n\t\t\tend\n\t\tend.\n\nDefinition Not (P: Prop) := P -> Impossible.\n\nDefinition not_impossible_manual: Not Impossible :=\n\tfun impossible => match impossible with end.\n\nDefinition ex_falso_quodlibet: forall (P: Prop), Impossible -> P :=\n\tfun (P: Prop) (impossible: Impossible) => match impossible return P with end.\n\nDefinition BoolTrue_not_BoolFalse_manual_full: Not (Eq BoolTrue BoolFalse) :=\n\tfun wrong_eq =>\n\t\tmatch wrong_eq in (Eq l r)\n\t\treturn\n\t\t\tmatch l, r with\n\t\t\t| BoolTrue, BoolTrue => Trivial\n\t\t\t| BoolFalse, BoolFalse => Trivial\n\t\t\t| _, _ => Impossible\n\t\t\tend\n\t\twith\n\t\t| EqCreate t => match t with\n\t\t\t| BoolTrue => TrivialCreate\n\t\t\t| BoolFalse => TrivialCreate\n\t\t\tend\n\t\tend.\n\nPrint BoolTrue_not_BoolFalse_manual_full.\n\n\nInductive day: Type :=\n\t| monday\n\t| tuesday\n\t| wednesday\n\t| thursday\n\t| friday\n\t| saturday\n\t| sunday.\n\nDefinition next_weekday (d: day): day :=\n\tmatch d with\n\t| monday => tuesday\n\t| tuesday => wednesday\n\t| wednesday => thursday\n\t| thursday => friday\n\t| friday => monday\n\t| saturday => monday\n\t| sunday => monday\n\tend.\n\nCompute (next_weekday friday).\n(* ==> monday : day *)\nCompute (next_weekday (next_weekday saturday)).\n(* ==> tuesday : day *)\n\nDefinition test_next_weekday_manual: Eq (next_weekday saturday) monday :=\n\tEqCreate monday.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n(* Counterexample by Thierry Coquand and Christine Paulin\n\t Translated into Coq by Vilhelm Sjöberg *)\n\n(* NotPos represents any positive, but not strictly positive, operator. 
*)\n(*Definition NotPos (a: Type) := (a -> Prop) -> Prop.*)\n\n(* If we were allowed to form the inductive type\n\t\t Inductive A: Type :=\n\t\t\t createA: NotPos A -> A.\n\t then among other things, we would get the following. *)\nAxiom A: Type.\n(*Axiom createA: NotPos A -> A.*)\nAxiom createA: ((A -> Prop) -> Prop) -> A.\n(*Axiom matchA: A -> NotPos A.*)\nAxiom matchA: A -> ((A -> Prop) -> Prop).\n(*Axiom roundtrip_same: forall (a: NotPos A), matchA (createA a) = a.*)\nAxiom roundtrip_same: forall (a: ((A -> Prop) -> Prop)), matchA (createA a) = a.\n\n(* In particular, createA is an injection. *)\nLemma createA_injective: forall p p', createA p = createA p' -> p = p'.\nProof.\n\tintros.\n\tassert (matchA (createA p) = (matchA (createA p'))) as H1 by congruence.\n\trewrite roundtrip_same in H1.\n\trewrite roundtrip_same in H1.\n\tassumption.\n\t(*now repeat rewrite roundtrip_same in H1.*)\nQed.\n\n(* However, ... *)\n\n(* Proposition: For any type A, there cannot be an injection\n\t from NotPos(A) to A. *)\n\n(* For any type T, there is an injection from T to (T->Prop),\n\t which is λt1.(λt2.t1=t2) . *)\nDefinition type_to_prop {T:Type}: T -> (T -> Prop) :=\n\tfun t1 t2 => t1 = t2.\n\nLemma type_to_prop_injective: forall T (t t': T), type_to_prop t = type_to_prop t' -> t = t'.\nProof.\n\tintros.\n\tassert (type_to_prop t t = type_to_prop t' t) as H1 by congruence.\n\tcompute in H1.\n\tsymmetry.\n\trewrite <- H1.\n\treflexivity.\nQed.\n\n(* Hence, by composition, we get an injection prop_to_type from A->Prop to A. *)\nDefinition prop_to_type: (A -> Prop) -> A\n\t:= fun p => createA (type_to_prop p).\n\nLemma prop_to_type_injective: forall p p', prop_to_type p = prop_to_type p' -> p = p'.\nProof.\n\tunfold prop_to_type. intros.\n\tapply createA_injective in H. apply type_to_prop_injective in H. assumption.\nQed.\n\n(* We are now back to the usual Cantor-Russel paradox. 
*)\n(* We can define *)\nDefinition AProp: A -> Prop\n\t(*the user gives us some `a: A`*)\n\t:= fun a =>\n\t\t(*and then we ask them to prove that some prop exists which:*)\n\t\t(*- **is the same as the object `a` they just gave us** *)\n\t\t(*- doesn't hold true for `a` *)\n\t\t(*these things together ask them to give us a prop that isn't true of itself!*)\n\t\texists (P: A -> Prop), prop_to_type P = a /\\ ~P a.\n\t\t(*this shouldn't be possible!*)\n\n\nDefinition a0: A := prop_to_type AProp.\n\n(* We have (AProp a0) iff ~(AProp a0) *)\nLemma bad: (AProp a0) <-> ~(AProp a0).\nProof.\nsplit.\n\t*\n\t\tunfold not.\n\t\t(*in this first case we've assumed aprop, but that's a problem because aprop *carries inside itself* a proof that there's some prop that isn't true of the thing we passed in, but also a proof that the thing we passed in is the same as the prop that isn't true!*)\n\t\t(*this is the core problem of non-strict positivity, that it allows an assumption we've made, an argument to our function, to already contain a proof of its own falsity*)\n\t\t(*this is definitely a kind of recursion, but it's *constructor* recursion rather than computational recursion *)\n\t\tintros [P [H1 H2]] H.\n\t\tunfold not in H2.\n\t\tunfold a0 in H1.\n\t\tapply prop_to_type_injective in H1. rewrite H1 in H2.\n\t\tauto.\n\t*\n\t\t(*this second case is the more straightforward one that assumes falsity already, so we just have to show that the proof of falsity actually applies to itself*)\n\t\tintros.\n\t\texists AProp.\n\t\tsplit; try assumption.\n\t\tunfold a0. reflexivity.\nQed.\n\n(* Hence a contradiction. *)\nTheorem worse: False.\n\tpose bad. 
tauto.\nQed.\n\nPrint worse.\n\n(*to construct yes, all you have to do is construct no*)\n(*and to construct no, you only have to construct a *function* that accepts (assumes) yes *)\n(*and you've done this in an environment where it's possible to use the contradictory theorem to prove False once you've hit that level*)\n\n(*is that the core idea of non-strict positivity? that it allows you to create situations where in order to create a *thing* you merely have to create a function that accepts the thing? with a side trip to creating ~thing?*)\n\n\nDefinition worse_manual: False :=\n\tand_ind (fun (forward: AProp a0 -> ~AProp a0) (backward: ~AProp a0 -> AProp a0) =>\n\t\tlet aprop: AProp a0 := (\n\t\t\tlet not_aprop: ~AProp a0 := (\n\t\t\t\tfun aprop: AProp a0 =>\n\t\t\t\t\tlet not_aprop: ~AProp a0 := forward aprop in\n\t\t\t\t\tlet ohno: False := not_aprop aprop in\n\t\t\t\t\tFalse_ind False ohno\n\t\t\t): ~AProp a0 in\n\t\t\tbackward not_aprop\n\t\t) in\n\t\t(fun aprop: AProp a0 =>\n\t\t\tlet not_aprop: ~AProp a0 := forward aprop in\n\t\t\tlet ohno: False := not_aprop aprop in\n\t\t\tFalse_ind False ohno\n\t\t) aprop\n\t) bad.\n\nTheorem worse_full: False.\n\tpose bad. destruct bad as [forward backward]. clear i.\n\tassert (AProp a0) by (apply backward; unfold not; intros; apply forward; assumption).\n\tapply forward. assumption.\n\tassumption.\nQed.\n"
  },
  {
    "path": "theory/utils.v",
    "content": "Add LoadPath \"/home/blaine/lab/cpdtlib\" as Cpdt.\nSet Implicit Arguments. Set Asymmetric Patterns.\nRequire Import List Cpdt.CpdtTactics.\nFrom stdpp Require Import base fin vector options.\nImport ListNotations.\n\nLtac solve_crush := try solve [crush].\nLtac solve_assumption := try solve [assumption].\nLtac subst_injection H := injection H; intros; subst; clear H.\n\nNotation impossible := (False_rect _ _).\nNotation this item := (exist _ item _).\nNotation use item := (proj1_sig item).\n\nRequire Import theory.list_assertions.\n\n\nFrom Coq.Wellfounded Require Import Inclusion Inverse_Image.\n\nSection wf_transfer.\n\tVariable A B: Type.\n\tVariable RA: A -> A -> Prop.\n\tVariable RB: B -> B -> Prop.\n\tVariable f: A -> B.\n\n\tTheorem wf_transfer:\n\t\t(forall a1 a2, (RA a1 a2) -> (RB (f a1) (f a2)))\n\t\t-> well_founded RB\n\t\t-> well_founded RA.\n\tProof.\n\t\tintros H?;\n\t\tapply (wf_incl _ _ (fun a1 a2 => (RB (f a1) (f a2))) H);\n\t\tapply wf_inverse_image; assumption.\n\tQed.\n\nEnd wf_transfer.\n\n\nSection VectorIndexAssertions.\n\tContext {T: Type}.\n\tContext {size: nat}.\n\tNotation IndexPair := (prod (fin size) T) (only parsing).\n\n\tDefinition index_disjoint (index: fin size) :=\n\t\tfun (index_pair: IndexPair) => ((index_pair.1) <> index).\n\n\tInductive UnionAssertion (vector: vec T size): list IndexPair -> Prop :=\n\t\t| union_nil: UnionAssertion vector []\n\t\t| union_cons: forall pairs index value,\n\t\t\tUnionAssertion vector pairs\n\t\t\t-> (vector !!! index) = value\n\t\t\t-> Forall (index_disjoint index) pairs\n\t\t\t-> UnionAssertion vector ((index, value) :: pairs)\n\t.\n\n\t(*Theorem UnionAssertion_no_duplicate_indices: forall vector index_pairs,\n\t\tUnionAssertion vector index_pairs\n\t\t-> NoDup (map (fun index_pair => index_pair.1) index_pairs).\n\tProof. 
Qed.*)\n\t(*\n\t\twhile this is fancy and everything, we almost certainly don't need it, since we really want a higher level opaque theorem stating that if two assertions point to the same location in the same vector then we can derive False\n\t*)\n\nEnd VectorIndexAssertions.\n\n\nSection convert_subset.\n\tVariable T: Type.\n\tVariable P Q: T -> Prop.\n\tTheorem convert_subset: {t | P t} -> (forall t: T, P t -> Q t) -> {t | Q t}.\n\tProof. intros []; eauto. Qed.\nEnd convert_subset.\nArguments convert_subset {T} {P} {Q} _ _.\n\nNotation convert item := (convert_subset item _).\n\nNotation Yes := (left _ _).\nNotation No := (right _ _).\nNotation Reduce x := (if x then Yes else No).\n\nTheorem append_single_cons {T: Type}: forall (t: T) l, t :: l = [t] ++ l.\nProof. induction l; auto. Qed.\n\n\n\n\nTheorem valid_index_not_None {T} (l: list T) index:\n\tindex < (length l) -> (lookup index l) <> None.\nProof.\n\tintros ??%lookup_ge_None; lia.\nQed.\nTheorem valid_index_Some {T} (l: list T) index:\n\tindex < (length l) -> exists t, (lookup index l) = Some t.\nProof.\n\tintros ?%(lookup_lt_is_Some_2 l index);\n\tunfold is_Some in *; assumption.\nQed.\n(*lookup_lt_Some*)\nDefinition safe_lookup {T} index (l: list T):\n\tindex < (length l) -> {t | (lookup index l) = Some t}\n.\n\tintros ?%valid_index_not_None;\n\tdestruct (lookup index l) eqn:Hlook; try contradiction;\n\trewrite <- Hlook; apply (exist _ t Hlook).\nDefined.\n\nTheorem safe_lookup_In {T} index (l: list T) (H: index < length l):\n\tIn (use (safe_lookup l H)) l.\nProof.\n\tapply elem_of_list_In; destruct (safe_lookup l H); simpl;\n\tapply elem_of_list_lookup; exists index; assumption.\nQed.\n\nTheorem Forall_safe_lookup {T} (P: T -> Prop) l:\n\tForall P l <-> forall index (H: index < length l), P (use (safe_lookup l H)).\nProof.\n\tsplit.\n\t-\n\t\tintros; destruct (safe_lookup l _); simpl;\n\t\tapply (Forall_lookup_1 P l index); assumption.\n\t-\n\t\tintros ?Hfunc; apply Forall_lookup; intros index item 
Hlookup;\n\t\tspecialize (lookup_lt_Some l index item Hlookup) as Hindex;\n\t\tspecialize (Hfunc index Hindex);\n\t\tdestruct (safe_lookup l _) as [item' Hitem'] in Hfunc; simpl in Hfunc;\n\t\trewrite Hlookup in Hitem'; subst_injection Hitem'; assumption.\nQed.\n\nDefinition closer_to target: nat -> nat -> Prop :=\n\tfun next cur => (target - next) < (target - cur).\n(*Hint Unfold closer_to: core.*)\n\nTheorem closer_to_well_founded target: well_founded (closer_to target).\nProof.\n\tapply (well_founded_lt_compat nat (fun a => target - a)); intros; assumption.\nDefined.\n\nTheorem closer_to_reverse: forall target cur next,\n\t(target - next) < (target - cur) -> cur < next.\nProof. lia. Qed.\n\nTheorem closer_to_bounded_reverse: forall target cur next,\n\tcur < next -> cur < target -> (target - next) < (target - cur).\nProof. lia. Qed.\n\nDefinition closer_to_end {T} (arr: list T) := closer_to (length arr).\n\nTheorem closer_to_end_well_founded {T} (arr: list T): well_founded (closer_to_end arr).\nProof. apply closer_to_well_founded. Qed.\n\nTheorem numeric_capped_incr_safe total begin cap index:\n\ttotal = begin + cap\n\t-> 0 < cap\n\t-> index < begin\n\t-> S index < total.\nProof. lia. 
Qed.\n\nTheorem capped_incr_safe {T} (total begin cap: list T) index:\n\ttotal = begin ++ cap\n\t-> 0 < length cap\n\t-> index < length begin\n\t-> S index < length total.\nProof.\n\tintros Htotal Hcap Hindex;\n\tassert (Hlen: length total = (length begin) + (length cap))\n\t\tby solve [rewrite Htotal; apply app_length];\n\tapply (numeric_capped_incr_safe Hlen Hcap Hindex).\nQed.\n\nTheorem index_pairs_lookup_forward {A B}:\n\tforall (items: list A) (f: nat -> A -> B) item index,\n\tlookup index items = Some item -> lookup index (imap f items) = Some (f index item).\nProof.\n\tinduction items; intros ??[]?;\n\ttry solve [apply (IHitems (compose f S)); assumption];\n\tnaive_solver.\nQed.\n\nTheorem index_pairs_lookup_back {A B}:\n\tforall (items: list A) (f: nat -> A -> B) item index,\n\t(forall index i1 i2, f index i1 = f index i2 -> i1 = i2)\n\t-> lookup index (imap f items) = Some (f index item)\n\t-> lookup index items = Some item.\nProof.\n\tinduction items; intros ??[]Hf?;\n\ttry solve [injection H; intros ?%Hf; naive_solver];\n\ttry solve [apply (IHitems (compose f S)); eauto];\n\tnaive_solver.\nQed.\n\nTheorem index_pair_equality {A B} (a: A) (b1 b2: B): (a, b1) = (a, b2) -> b1 = b2.\nProof. naive_solver. 
Qed.\n\nInductive partial (P: Prop): Type :=\n\t| Proven: P -> partial P\n\t(*| Falsified: ~P -> partial P*)\n\t| Unknown: partial P\n.\nNotation proven := (Proven _).\nNotation unknown := (Unknown _).\nNotation provenif test := (if test then proven else unknown).\n\nSection find_obligations.\n\tContext {T: Type}.\n\tVariable P: T -> Prop.\n\n\tTheorem forall_done_undone items done undone:\n\t\tPermutation items (done ++ undone)\n\t\t-> Forall P done -> Forall P undone\n\t\t-> Forall P items.\n\tProof.\n\t\tintros Hpermutation??; assert (Happ: Forall P (done ++ undone))\n\t\t\tby solve [apply Forall_app_2; assumption];\n\t\tsetoid_rewrite Hpermutation; assumption.\n\tQed.\n\n\tVariable compute_partial: forall t: T, partial (P t).\n\n\tDefinition split_by_maybe: forall items: list T, {\n\t\tpair | Permutation items (pair.1 ++ pair.2) /\\ Forall P pair.1\n\t}.\n\t\trefine (fix split_by_maybe items :=\n\t\t\tmatch items with\n\t\t\t| [] => this ([], [])\n\t\t\t| item :: items' =>\n\t\t\t\tlet (pair, H) := split_by_maybe items' in\n\t\t\t\tmatch (compute_partial item) with\n\t\t\t\t| Proven _ => this ((item :: pair.1), pair.2)\n\t\t\t\t| Unknown => this (pair.1, (item :: pair.2))\n\t\t\t\tend\n\t\t\tend\n\t\t);\n\t\tintros; split; simpl in *; try destruct H;\n\t\ttry solve [setoid_rewrite H; apply Permutation_middle]; auto.\n\tDefined.\n\n\tDefinition find_obligations_function: forall items, {\n\t\tobligations | Forall P obligations -> Forall P items\n\t}.\n\t\trefine (fun items =>\n\t\t\tlet (pair, H) := split_by_maybe items in\n\t\t\tthis pair.2\n\t\t);\n\t\tdestruct H; apply (forall_done_undone H); assumption.\n\tDefined.\n\n\tTheorem verify__find_obligations_function:\n\t\tforall items found, found = find_obligations_function items\n\t\t-> Forall P (use found) -> Forall P items.\n\tProof. intros ?[]; auto. 
Qed.\n\nEnd find_obligations.\n\n(* evaluates [find_obligations_function] over [items], leaving as subgoals only the obligations [compute_partial] could not discharge *)\nLtac find_obligations__helper P compute_partial items :=\n\tlet found := eval compute in (find_obligations_function P compute_partial items) in\n\tlet pf := eval compute in (proj2_sig found) in\n\tapply pf; apply Forall_fold_right; simpl; repeat split\n.\n\n(* normalizes the common forall-element goal shapes into [Forall P items], then defers to the helper above *)\nLtac find_obligations P compute_partial items :=\n\tmatch goal with\n\t| |- Forall P items =>\n\t\tfind_obligations__helper P compute_partial items\n\n\t| |- forall item, In item items -> P item =>\n\t\tapply Coq.Lists.List.Forall_forall;\n\t\tfind_obligations__helper P compute_partial items\n\n\t| |- forall item, elem_of item items -> P item =>\n\t\tapply Forall_forall;\n\t\tfind_obligations__helper P compute_partial items\n\n\t| |- forall index def, index < length items -> P (nth index items def) =>\n\t\tapply Coq.Lists.List.Forall_nth;\n\t\tfind_obligations__helper P compute_partial items\n\n\t| |- forall index item, (lookup index items) = Some item -> P item =>\n\t\tapply Forall_lookup;\n\t\tfind_obligations__helper P compute_partial items\n\n\t| |- forall index, index < length items -> P (items !!! index) =>\n\t\tapply Forall_lookup_total;\n\t\tfind_obligations__helper P compute_partial items\n\n\t| |- forall index (H: index < length items), P (use (safe_lookup items H)) =>\n\t\tapply Forall_safe_lookup;\n\t\tfind_obligations__helper P compute_partial items\n\tend\n.\n\nModule test__find_obligations.\n\tDefinition P n := (n < 4 \\/ n < 6).\n\tDefinition compute_partial: forall n, partial (P n).\n\t\trefine (fun n => if (lt_dec n 4) then proven else unknown); unfold P; lia.\n\tDefined.\n\n\tDefinition items := [0; 1; 2; 4; 3; 2; 5].\n\tTheorem find_obligations__Forall: Forall P items.\n\tProof. find_obligations P compute_partial items; lia. Qed.\n\tTheorem find_obligations__forall_In: forall item, In item items -> P item.\n\tProof. find_obligations P compute_partial items; lia. 
Qed.\n\tTheorem find_obligations__forall_elem_of: forall item, elem_of item items -> P item.\n\tProof. find_obligations P compute_partial items; lia. Qed.\n\tTheorem find_obligations__forall_nth: forall index def, index < length items -> P (nth index items def).\n\tProof. find_obligations P compute_partial items; lia. Qed.\n\tTheorem find_obligations__forall_lookup: forall index item, (lookup index items) = Some item -> P item.\n\tProof. find_obligations P compute_partial items; lia. Qed.\n\tTheorem find_obligations__forall_lookup_total: forall index, index < length items -> P (items !!! index).\n\tProof. find_obligations P compute_partial items; lia. Qed.\n\tTheorem find_obligations__forall_safe_lookup:\n\t\tforall index (H: index < length items), P (use (safe_lookup items H)).\n\tProof. find_obligations P compute_partial items; lia. Qed.\nEnd test__find_obligations.\n\n\n(*\nInductive Result (T: Type) (E: Type): Type :=\n\t| Ok (value: T)\n\t| Err (error: E).\n\nArguments Ok {T} {E} _.\nArguments Err {T} {E} _.\n*)\n"
  }
]