MixedOp implementation #3
Conversation
I will review this in ~2 weeks 👍
Submitting an in-progress review as some initial feedback
@@ -337,4 +337,4 @@
    },
    "nbformat": 4,
    "nbformat_minor": 4
}
Please revert this change 🙂
@@ -110,7 +114,7 @@ class QubitConverter:

     def __init__(
         self,
-        mapper: QubitMapper,
+        mappers: Sequence[QubitMapper],
A scalable approach for the future will need to use a `Mapping` instead of a `Sequence` here, which then maps from operator type to mapper. Alternatively, we could also envision inferring the compatible operator type from a mapper instance and then using a `Collection` (i.e. a set of mappers).
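A minimal sketch of the `Mapping`-based design suggested here, with mappers keyed by the operator type they handle. All names (`map_by_type`, the callable-style mappers) are illustrative, not the PR's actual API:

```python
from typing import Any, Callable, Mapping, Sequence


def map_by_type(
    operators: Mapping[type, Sequence[Any]],
    mappers: Mapping[type, Callable[[Any], Any]],
) -> dict:
    """Apply the type-appropriate mapper to each group of operators.

    Hypothetical helper: `operators` groups second-quantized operators by
    their type, and `mappers` maps each operator type to its mapper.
    """
    return {
        op_type: [mappers[op_type](op) for op in ops]
        for op_type, ops in operators.items()
    }
```

With this layout, looking up the right mapper for a given operator is a single dictionary access, instead of a positional convention in a `Sequence`.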
And arguably we should still support specifying only a single mapper (i.e. with `| QubitMapper`). I think the code below does this, but the type hint does not reflect it.
# convert all fermionics and store in list
qubit_f_ops = []
max_f_reg_length = 0
We should actually check that all of them result in identical-length operators; otherwise something is likely wrong.
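A small sketch of the check being suggested. The `num_qubits` attribute mirrors the one used in the diff; everything else is hypothetical:

```python
def assert_uniform_length(qubit_ops):
    """Raise if the mapped operators act on differing register lengths.

    Illustrative helper, not the PR's code: collects the distinct
    register lengths and fails loudly when there is more than one.
    """
    lengths = {op.num_qubits for op in qubit_ops}
    if len(lengths) > 1:
        raise ValueError(
            f"Mapped operators have differing register lengths: {sorted(lengths)}"
        )
```

Running this right after the conversion loop would surface a mismatch immediately, instead of letting it propagate into the tensor products below.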
qubit_ops = []
for c in mix_coefficients:
    if len(c[0]) > 1:
        qubit_ops.append(qubit_f_ops[c[0][0][1]] ^ qubit_s_ops[c[0][1][1]] * c[1])
The accessing of the coefficients is a bit cryptic here. Probably needs more comments on that, down below in the `MixedOp`.
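To make the indexing above concrete, here is a guess at the coefficient layout the snippet indexes into. This is purely illustrative; the actual `MixedOp` layout may differ:

```python
# Assumed layout: each entry in mix_coefficients is (index_tuples, weight),
# where index_tuples lists (operator_kind, position_in_list) pairs.
c = ([("fermionic", 0), ("spin", 1)], 0.5)

index_tuples, weight = c        # c[1] is the scalar weight
(_, f_idx) = index_tuples[0]    # c[0][0][1] -> index into qubit_f_ops
(_, s_idx) = index_tuples[1]    # c[0][1][1] -> index into qubit_s_ops
```

If the layout really is nested like this, naming the pieces (as above) or switching to small dataclasses would remove most of the cryptic `c[0][0][1]`-style access.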
# convert all spins and store in list
qubit_s_ops = []
max_s_reg_length = 0
for s_op in operators[SpinOp]:
    q_op = apply_map_sym(s_op, spin_mapper)
    max_s_reg_length = q_op.num_qubits if q_op.num_qubits > max_s_reg_length else max_s_reg_length
    qubit_s_ops.append(q_op)
I know that this prototype is currently hardcoding fermionic and spin ops, but we should try to design this block of code as a type-independent standalone piece of code.
This could then also be combined with the identity construction below.
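A sketch of what such a type-independent block could look like: one helper handles any operator group, so the fermionic and spin cases (and the identity construction) need not be hardcoded. The names are illustrative; `num_qubits` mirrors the attribute used in the diff:

```python
def convert_group(ops, mapper, apply_map):
    """Map each operator in a homogeneous group and track the maximum
    register length.

    Hypothetical helper: works for fermionic, spin, or any other operator
    type, given the group's mapper and a mapping function.
    """
    qubit_ops = []
    max_reg_length = 0
    for op in ops:
        q_op = apply_map(op, mapper)
        max_reg_length = max(max_reg_length, q_op.num_qubits)
        qubit_ops.append(q_op)
    return qubit_ops, max_reg_length
```

The two hardcoded loops above would then collapse into two calls: `convert_group(operators[FermionicOp], fermionic_mapper, apply_map_sym)` and `convert_group(operators[SpinOp], spin_mapper, apply_map_sym)`.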
else:
    return self._mappers.map(second_q_op)

def _map_multiple(self, second_q_op: MixedOp) -> PauliSumOp:
Can we generalize this implementation to not be `map`-specific but only deal with handling a `MixedOp`, such that it can work with an arbitrary internal method such as `_map`, `_two_qubit_reduce`, etc.?
from .mixed_op import MixedOp
return MixedOp(([self, other], 1))
I think it should be possible to leave the base operators unchanged, by handling this scenario in the `MixedOp.__rmatmul__` case 🤔
I'm not sure how we could do this for the case `FermionicOp @ SpinOp`, because there isn't any `MixedOp` involved yet, so `MixedOp.__rmatmul__` would not be called, right?
if type(op) in self.ops:
    self.ops[type(op)].append(op)
else:
    self.ops[type(op)] = [op]
You could simplify this by using a `defaultdict(list)` for `self.ops`.
)

def __len__(self):
    return len(self.ops[FermionicOp]) + len(self.ops[SpinOp])
-    return len(self.ops[FermionicOp]) + len(self.ops[SpinOp])
+    return sum(len(val) for val in self.ops.values())
…o mixed-op Pull changes from origin
Hello! I will be branching this repo since the lattice gauge theory needs MixedOps. That would potentially make it easier for Max to review the code. This way I can also give feedback in case something breaks.
Closing this in favor of qiskit-community#1188
Summary

This is a PR to expose my PoC for the `MixedOp` implementation and the corresponding extension of the `QubitConverter` class.

Apart from the corresponding classes, there are 2 PoC files (not in the best location) to show the functionality that has been implemented. I recommend taking a look at this one.

Details and comments

Implemented:
- `MixedOp` class that supports:
  - Multiplication with a scalar
  - `MixedOp` addition
  - `repr()` and `len()` methods
- Extension of `FermionicOp` to return a `MixedOp` (only `FermionicOp @ SpinOp`; any other order or classes will not work!)
- Extension of `QubitConverter` to support `MixedOp`

Missing:
- `MixedOp` + `MixedOp`
- `repr()`, etc.
- `QubitConverter`, including proper logic to deal with symmetry reductions in the `MixedOp` case.