Wishlist with SAC mapping in mind

Hi… I was thinking about options that could help my efficiency.  Most importantly, I’d love to be able to favourite a cube, and go back to see it at a later date to compare my mapping to the final result.  With the SAC cells this could be especially helpful, as on some cubes you kind of make a ‘leap of faith’ and hope that the bold steps you took were correct. Seeing it after it’s been finalised by the lab team would be very interesting and useful.

I’d also like to be able to mark a cube as especially difficult.  If I am really unsure about my result, but still want to throw it out there, then perhaps it could be flagged and given to the most experienced users to try, and for the Eyewire team to take a look at.  Of course - this system could be overused if new users flag everything they’re unsure about, so I’m not sure of the best way to implement it.

Just some thoughts for future development.  :)   If I think of anything else I’ll add it to the thread.

Thank you smalljude for posting these great ideas.  I too would love to see if my speculative choices were the right ones.  It would help me learn what to look for.

And there have been times when I wanted to flag a cube.  Maybe that’s a perk that could be earned with points/cubes/time?  That way newbies couldn’t overuse flags.

Hope these are part of future upgrades.


Thanks ouiz!   :slight_smile:

It’s been mentioned before, but on the same wishlist would be a way to flag a cube as a possible merger (as compared to simply skipping it).  I’ve had a few on the latest SAC cell and although I’ve emailed support about them, it would be good to have an official way to report them that notes the exact cube #.

cheers :slight_smile:
Bookmarking cubes: Great idea!
"Possible merger" checkbox: Great idea!
Difficulty rating: Bad idea... ha, just kidding! Great idea! :-D

However, a suggestion/alternative for the last point:
Maybe an automated difficulty rating of cubes would eliminate the problem of overusing the suggested flag? Hm, in fact it might also eliminate the need for a "difficulty" flag altogether...

What I mean is: I assume the similarity between user submissions is a good indicator for rating the difficulty of a cube.
And @echo mentioned that there is a hidden accuracy rating for every player (true accuracy, compared to staff's results rather than to other "mere players'" results...)

With those two pieces of information, you could preferentially(!) assign difficult cubes to advanced players, easier cubes to less accurate players, and cubes where even the advanced players go nuts (are in disagreement) to the staff members.

Edit 1: An advantage would be that the difficulty rating is objective: hypothetical people with high self-esteem but really poor accuracy (those who would never use the "difficult" flag) unknowingly influence the difficulty rating of the cube, just as highly accurate but unconfident eyewirers do.
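To make the idea concrete, here's a rough Python sketch of what I mean. Everything in it (the disagreement measure, the thresholds, the routing buckets) is made up by me for illustration, not anything Eyewire actually does:

```python
# Hypothetical sketch: rate a cube's difficulty by how much player
# submissions disagree, then route hard cubes to accurate players and
# the truly contentious ones to staff. All names/thresholds invented.

def cube_difficulty(submissions):
    """Difficulty = average pairwise disagreement between submissions.

    Each submission is the set of segment ids a player marked.
    0.0 means everyone agreed exactly; 1.0 means no overlap at all.
    """
    pairs = [(a, b) for i, a in enumerate(submissions)
             for b in submissions[i + 1:]]
    if not pairs:
        return 0.0

    def disagreement(a, b):
        union = a | b
        return 1 - len(a & b) / len(union) if union else 0.0

    return sum(disagreement(a, b) for a, b in pairs) / len(pairs)

def route_cube(difficulty, player_accuracy, hard=0.5, expert=0.9):
    """Preferentially hand hard cubes to accurate players;
    escalate the ones where even experts disagree."""
    if difficulty > 0.8:
        return "staff"      # hopeless disagreement: staff review
    if difficulty > hard:
        return "expert" if player_accuracy >= expert else "requeue"
    return "assign"         # easy enough for anyone
```

So a cube where three players submitted identical answers scores 0.0 and gets assigned normally, while one with no overlap at all scores 1.0 and goes straight to staff.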

All good suggestions. I see two objectives:

1. To improve the accuracy for the scientists - ability to bookmark mergers etc.
2. To improve the ability of eyewirers. In that respect what we need most of all is feedback: to assess our own performance (and see where we went wrong or what we missed).

I imagine it would be a huge jump in database capacity to store everyone’s piece-by-piece submissions, and that’s not going to be practical, but how about the following:
  • being able to recall the final results for previously completed cubes
  • representing the results as probabilities - color coding each bit according to the percentage of people marking that branch
Not only would you be able to see how you compared to the consensus (assuming you can recall how you marked it), but it’s also a last chance to catch that elusive branch which only one person actually found but was dismissed by the AI.
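Something like this toy Python sketch is what I have in mind for the color coding. The segment ids, vote counts, and the three-step color ramp are all hypothetical, just to show the shape of the idea:

```python
# Hypothetical consensus view: each segment's color reflects the
# fraction of players who marked it, so rare picks (the "elusive
# branch" case) stand out. Thresholds and colors are invented.

def consensus_colors(votes, n_players):
    """Map {segment_id: times_marked} to {segment_id: (fraction, color)}."""
    def color(frac):
        if frac >= 0.8:
            return "green"    # strong consensus: almost everyone marked it
        if frac >= 0.5:
            return "yellow"   # majority, but worth a second look
        return "red"          # rare pick: maybe a missed branch, maybe noise

    return {seg: (count / n_players, color(count / n_players))
            for seg, count in votes.items()}
```

So with 10 players, a segment marked 9 times shows green, one marked 5 times shows yellow, and that lone one-person branch glows red for review.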

Great ideas everyone!

@smalljude, I hate to break it to you, but you fall into the category of "the most experienced users!"

We’ve already taken a few small steps in the directions you are describing.  Now that we have retro points for trailblazing and a better view of user accuracy, we’ve started preferentially giving new tasks to our more accurate users.  The theory is that the ground truth we use for scoring will be more accurate by the time the task reaches less accurate users, so the scoring system gives them more accurate feedback.

What you all are talking about are ways to help our top users get even better!  We’ve often considered some sort of review mode, but we haven’t come up with a system that we’re happy with.  Maybe some sort of flagging or favorite system would work; I’ll float it around the lab.

At the moment, we are working on more training/performance evaluation as a mechanism to “gate” the SACs.  Right now, even the experts are making a lot of mistakes with them.  Hopefully we can help everyone get better at them.  By limiting who can actually contribute to them, we are hoping to speed up progress (because fewer incorrect validations are being contributed).  We’re also working on ways to incentivize players working on SAC cells.  Hopefully we can generate enough enthusiasm to get people through the training and “level up” their tracing abilities.

We’ll let you know when we have more details.

Thanks so much Matt… I know you guys are working really hard and taking the things we say into consideration.  It’s pretty amazing to think we can participate in all this :slight_smile:  I don’t know of any other citizen science project where the users have such interaction with the science team - it’s really wonderful!   

If I am an experienced user, I’m certainly one who is capable of messing up big time (my recent SAC cell mapping is an example… yikes!). Although that’s personally very disappointing, it also gives me something to work towards, and is a reminder not to get complacent or overconfident.

One thought I just had, which perhaps wouldn’t be too much work for you to implement (if it interested you), is the ability to redo a cube immediately after you’ve submitted and seen your score. With the SAC cubes that I thought I was getting right, and then got 20 points for… I suspect it would have been really good for me to try them again from scratch and see if I improved.  Just a thought anyway.

Meanwhile… back I go to the SAC… gulp… may the force be with me :stuck_out_tongue:

Not necessarily a good idea Jude, at least not from a science standpoint. Remember that the scoring only compares yours to what’s been submitted before, so anyone who finds a new branch differs from earlier submissions and is penalised for it. So if you were to redo the cube with the bad score, you would have to delete the new branch. Of course, maybe it wasn’t a new branch, maybe you screwed up… :-/

Whenever I get a ‘20’ cube I content myself with the thought - “Must have found a new branch”.

Yeah, that’s a good thought grizle.  In general, we are trying to keep each time anyone submits a cube as an independent observation.  That lets us do statistics and other things on the results you provide.  If we start cluing you into things before you do a cube, that can create interdependence issues.  For example, as grizle suggested, if we give you bad feedback then you do a cube wrong, are you bad, or is our feedback system bad?  In general, I like the idea of giving people more feedback after the fact so that they can improve.  I’m more wary of giving people feedback before they do a cube however…

Ah right, I see your points.

The cubes that I would have liked to redo would be perfect cases for marking as favourite. Then I’d return later and compare my answer to the consensus.  That would provide all the feedback I’d need - I’m not concerned with getting a better score, just doing better science!  :)