Let's try MonadThrow/MonadCatch. It looks nice at first glance. The
monad transformer stack contains only ReaderT, so fewer lifts are
required. Exception subtyping is easier: the user can (and should)
define custom error types and throw them. And it is still possible to
use pure error handling if someone doesn't like runtime exceptions or
needs to run a query in a pure environment.
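A minimal sketch of the idea, with a made-up environment, error type and
resolver (none of these names are part of the library's API): a custom
error is thrown with `MonadThrow` and recovered with `MonadCatch`, while
the transformer stack stays a plain `ReaderT`.

```haskell
import Control.Monad.Catch (Exception, MonadCatch, MonadThrow, catch, throwM)
import Control.Monad.Trans.Reader (ReaderT, asks, runReaderT)

-- Hypothetical resolver environment and custom error type.
newtype Env = Env { users :: [(Int, String)] }

newtype ResolverError = UserNotFound Int
    deriving Show

instance Exception ResolverError

-- The resolver works in any monad with MonadThrow; IO fits, but a pure
-- carrier such as CatchT from Control.Monad.Catch.Pure works as well.
userName :: MonadThrow m => Int -> ReaderT Env m String
userName uid = do
    known <- asks users
    maybe (throwM (UserNotFound uid)) pure (lookup uid known)

-- Recovering from the custom error with MonadCatch.
userNameOrDefault :: MonadCatch m => Int -> ReaderT Env m String
userNameOrDefault uid =
    userName uid `catch` \(UserNotFound _) -> pure "anonymous"

main :: IO ()
main = runReaderT (userNameOrDefault 42) (Env [(1, "alice")]) >>= putStrLn
```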
Fixes#42.
After the last commit there were a few places that needed to be adjusted
to support subscriptions. This is done, and a test case has been added.
It is important to implement subscriptions now, because they require
changes to the library API, and they are a big missing piece needed to
finish the executor. When the executor is finished, we can start to
provide a more stable API without breaking everything every release.
Validation and introspection shouldn't require many changes to the API;
the AST would need some changes to report good errors after validation -
that is one thing I can think of.
Fixes#5.
This is experimental support.
The implementation is based on conduit and is boring. There is a new
resolver data constructor that should create a source event stream. The
executor receives the events, pipes them through the normal execution
path and puts them into the response stream, which is returned to the
user. A rough sketch of this flow follows the notes below.
- Tests are missing.
- The executor should check the field value resolver on subscription
  types.
- The graphql function should probably return (Either
  ResponseEventStream Response), but I'm not sure about this. It would
  make the usage more complicated when no subscriptions are involved,
  but with the current API implementing subscriptions is more difficult
  than it should be.
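Here is that sketch, with made-up names (`Event`, `Response` and
`executeEvent` are placeholders, not the library's types): a conduit
source produces the events, each one is run through the execution step,
and the resulting responses form the stream handed back to the caller.

```haskell
import Conduit (ConduitT, mapMC, runConduit, sinkList, yieldMany, (.|))

newtype Event = Event String
newtype Response = Response String

-- Stand-in for executing the selection set against a single event.
executeEvent :: Event -> IO Response
executeEvent (Event payload) = pure (Response ("executed: " ++ payload))

-- The source event stream a subscription resolver would create.
sourceEvents :: Monad m => ConduitT () Event m ()
sourceEvents = yieldMany (Event <$> ["a", "b", "c"])

-- The response event stream returned to the user: every incoming event
-- is piped through the normal execution step.
responseStream :: ConduitT () Response IO ()
responseStream = sourceEvents .| mapMC executeEvent

main :: IO ()
main = do
    responses <- runConduit (responseStream .| sinkList)
    mapM_ (\(Response r) -> putStrLn r) responses
```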
Returning resolvers from other resolvers isn't supported anymore. Since
we have a type system now, we define the resolvers in the object type
fields and pass them an object with the previous result.
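A conceptual sketch with made-up types (these are not the library's
current definitions): every field of an object type carries its own
resolver, and that resolver receives the value produced by the parent.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.HashMap.Strict (HashMap)
import qualified Data.HashMap.Strict as HashMap
import Data.Maybe (fromMaybe)
import Data.Text (Text)

-- Simplified stand-ins for the value and type-system representations.
data Value = String Text | Object (HashMap Text Value)

-- An object type maps field names to fields; each field owns its
-- resolver instead of being produced by another resolver.
newtype ObjectType m = ObjectType (HashMap Text (Field m))

-- The resolver is handed the parent value, i.e. the previous result.
newtype Field m = Field (Value -> m Value)

userType :: Applicative m => ObjectType m
userType = ObjectType $ HashMap.fromList
    [ ("name", Field nameResolver)
    ]
  where
    nameResolver (Object parent) =
        pure (fromMaybe (String "") (HashMap.lookup "name" parent))
    nameResolver _ = pure (String "")
```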
This makes using variables with queries more approachable, but some work
still has to be done.
- The type `Subs` should be renamed and moved out of `Schema`, together with
  `AST.Core.Value` probably.
- Some kind of conversion should be possible from a user-defined input
  type T to the `Value`. The final HashMap should then have a type like
  `HashMap name a`, where `a` is an instance of a potential `InputType`
  typeclass (a sketch of what such a class could look like follows below).
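The following is purely hypothetical: neither `InputType` nor these
instances exist in the library; the names only illustrate the kind of
conversion meant above.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.HashMap.Strict (HashMap)
import qualified Data.HashMap.Strict as HashMap
import Data.Text (Text)

-- Simplified stand-in for AST.Core.Value.
data Value = Int Int | String Text | Boolean Bool

-- Anything convertible to a GraphQL value could be passed as a variable.
class InputType a where
    toValue :: a -> Value

instance InputType Int where
    toValue = Int

instance InputType Text where
    toValue = String

-- The substitution map then accepts any InputType instead of raw Values.
variables :: InputType a => HashMap Text a -> HashMap Text Value
variables = fmap toValue

example :: HashMap Text Value
example = variables (HashMap.fromList [("id", 4 :: Int)])
```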
Fixes#18.
- `Language.GraphQL.Encoder` moved to `Language.GraphQL.AST.Encoder`.
- `Language.GraphQL.Parser` moved to `Language.GraphQL.AST.Parser`.
- `Language.GraphQL.Lexer` moved to `Language.GraphQL.AST.Lexer`.
- All `Language.GraphQL.AST.Value` data constructor prefixes were removed. The
module should be imported qualified.
- All `Language.GraphQL.AST.Core.Value` data constructor prefixes were removed.
The module should be imported qualified.
- `Language.GraphQL.AST.Transform` isn't exposed publicly anymore.
These functions are from Language.GraphQL.Schema.
There are actually only two generic types in GraphQL: scalars and
objects. An enum is a scalar value. According to the specification,
enums may be serialized to strings, and in the current implementation
they use untyped strings anyway, so there is no point in having
differently named functions with the same implementation as their
scalar counterparts.
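A minimal illustration of the point, with a made-up `Episode` type:
serializing an enum member is just serializing its name as a string, so
a dedicated code path for enums adds nothing over the scalar one.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Text (Text)

-- Made-up enum type.
data Episode = NewHope | Empire | Jedi

-- The enum is sent to the client as its name, i.e. as a plain string
-- scalar; there is nothing enum-specific in the serialized form.
serializeEpisode :: Episode -> Text
serializeEpisode NewHope = "NEW_HOPE"
serializeEpisode Empire = "EMPIRE"
serializeEpisode Jedi = "JEDI"
```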
It is not a schema (at least not a complete one), but a resolver list,
and the resolvers should be provided by the user separately, because the
schema can originate from a GraphQL document. The name Schema should
stay free so that a data type for the real schema can be provided later.
This replaces most usages of MonadPlus, which is not appropriate for
the resolvers, since a resolver is unambiguously chosen by name (no
need for 'mplus'), and the resolvers often do IO.
Errors in the resolvers can now be handled, and three tests that throw
errors pass. Another test still fails, but it requires distinguishing
nullable and non-nullable values.
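A sketch of the difference, again with made-up names: the resolver for a
field is picked by name from a list, so there is nothing to chain with
'mplus', and the chosen action is free to do IO and to report an error
instead of silently failing.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.List (find)
import Data.Text (Text)
import qualified Data.Text.IO as Text.IO

data Resolver = Resolver
    { resolverName :: Text
    , resolverAction :: IO (Either Text Text) -- Left carries a field error.
    }

-- Unambiguous selection by field name; no 'mplus' involved.
runField :: [Resolver] -> Text -> IO (Either Text Text)
runField resolvers name =
    case find ((name ==) . resolverName) resolvers of
        Just resolver -> resolverAction resolver
        Nothing -> pure (Left ("field not found: " <> name))

main :: IO ()
main = do
    let resolvers = [Resolver "hello" (pure (Right "Hello, World!"))]
    runField resolvers "hello" >>= either Text.IO.putStrLn Text.IO.putStrLn
```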
One AST is meant to be the parser target and tries to adhere as closely
as possible to the spec. The other is a simplified version of that AST
meant for execution. Also, newtypes have been replaced by type synonyms,
and NonEmpty lists are used where it makes sense.
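An illustrative fragment (not the actual definitions) of what the split
looks like: the parser-facing AST follows the grammar and uses
`NonEmpty` where a selection set can never be empty, while the
execution-oriented AST keeps only what the executor needs, with plain
type synonyms instead of newtypes.

```haskell
import Data.List.NonEmpty (NonEmpty)
import Data.Text (Text)

-- Type synonyms instead of newtypes.
type Name = Text
type Alias = Name

-- Parser-facing shape, close to the specification grammar: an operation
-- always contains at least one selection.
data OperationDefinition = OperationDefinition Name SelectionSet
type SelectionSet = NonEmpty Selection
data Selection = Field (Maybe Alias) Name [Selection]

-- Simplified, execution-oriented counterpart: aliases already applied,
-- only what the executor needs is kept.
data Operation = Operation Name (NonEmpty CoreField)
data CoreField = CoreField Name [CoreField]
```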
Aside from making the definition of schemas easier, it takes care of
issues like nested aliases, which previously weren't possible. The
naming of the DSL functions is still provisional.